
    Retina-Inspired and Physically Based Image Enhancement

    Images and videos with good lightness and contrast are vital in several applications where human experts make important decisions based on the imaging information, such as medical, security, and remote sensing applications. Well-known image enhancement methods include spatial and frequency-domain techniques such as linear transformation, gamma correction, contrast stretching, histogram equalization and homomorphic filtering. These conventional techniques are easy to implement but do not recover the exact colour of the images; hence they have limited application areas. Conventional image/video enhancement methods have been widely used, each with its own advantages and drawbacks; since the last century there has been increasing interest in retina-inspired techniques, e.g., Retinex and Cellular Neural Networks (CNN), as they attempt to mimic the human retina. Despite considerable advances in computer vision techniques, the human eye and visual cortex still far surpass the performance of state-of-the-art algorithms. This research aims to propose a retinal network computational model for image enhancement that mimics the retinal layers, targeting the interconnectivity between the Bipolar receptive field and the Ganglion receptive field. The research started by enhancing two state-of-the-art image enhancement methods through their integration with image formation models. In particular, physics-based features (e.g. the Spectral Power Distribution of the dominant illuminant in the scene and the Surface Spectral Reflectance of the objects contained in the image) are estimated and used as inputs for the enhanced methods. The results show that the proposed technique can adapt to scene variations such as changes in illumination, scene structure, camera position and shadowing. It gives superior performance over the original model. The research has successfully proposed a novel Ganglion Receptive Field (GRF) computational model for image enhancement. Instead of considering only the interactions between each pixel and its surroundings within a single colour layer, the proposed framework introduces interactions between different colour layers to mimic the retinal neural process; to better mimic the centre-surround retinal receptive field concept, the outputs of different photoreceptors are combined. Additionally, this thesis proposes a new contrast enhancement method based on Weber's Law. The objective evaluation shows the superiority of the proposed Ganglion Receptive Field (GRF) method over state-of-the-art methods. The contrast-restored image generated by the GRF method achieved the highest performance in contrast enhancement and luminance restoration; however, it achieved lower performance in structure preservation, which is consistent with physiological studies that observe the same behaviour in the human visual system.
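
    The centre-surround and Weber's-Law notions mentioned above can be illustrated with a minimal, generic sketch. This is not the thesis's GRF or contrast model; the scipy-based implementation, function names and sigma values are assumptions chosen only for illustration.

    # Generic centre-surround (difference-of-Gaussians) response and a
    # Weber's-Law style local contrast; purely illustrative, not the GRF model.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def centre_surround(channel, sigma_centre=1.0, sigma_surround=5.0):
        """Approximate a retinal centre-surround receptive field on one colour layer."""
        img = channel.astype(float)
        return gaussian_filter(img, sigma_centre) - gaussian_filter(img, sigma_surround)

    def weber_contrast(channel, sigma_surround=5.0, eps=1e-6):
        """Weber's Law: local contrast as (intensity - background) / background."""
        img = channel.astype(float)
        background = gaussian_filter(img, sigma_surround)
        return (img - background) / (background + eps)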

    Endoscopic Vision Augmentation Using Multiscale Bilateral-Weighted Retinex for Robotic Surgery

    Surgical vision in medical robotics is key to the success of minimally invasive surgery. Inherent limitations of the medical electronic endoscope lead to problems such as an unclear surgical field, uneven illumination and heavy smoke, preventing surgeons from accurately and rapidly perceiving and recognizing structural information such as nerves, blood vessels and lesion locations within internal organs, which inevitably increases surgical risk and operating time. To address these problems, this paper proposes a multiscale Retinex method based on bilateral-filter weight analysis, and uses it to process and analyse patient videos acquired during da Vinci robotic surgery. In subjective evaluation, surgeons agreed that the method greatly enhances the quality of the surgical field, while objective evaluation shows that the proposed method outperforms current image enhancement and restoration methods in computer vision. Professor Xiongbiao Luo of the Department of Computer Science, School of Information Science and Technology, Xiamen University, is the first author of this paper. 【Abstract】Endoscopic vision plays a significant role in minimally invasive surgical procedures. The visibility and maintenance of such direct in-situ vision is paramount not only for safety, by preventing inadvertent injury, but also to improve precision and reduce operating time. Unfortunately, endoscopic vision is unavoidably degraded due to illumination variations during surgery. This work aims to restore or augment such degraded visualization and quantitatively evaluate it during robotic surgery. A multiscale bilateral-weighted Retinex method is proposed to remove non-uniform and highly directional illumination and enhance surgical vision, while an objective no-reference image visibility assessment method is defined in terms of sharpness, naturalness, and contrast, to quantitatively and objectively evaluate endoscopic visualization on surgical video sequences. The methods were validated on surgical data, with the experimental results showing that our method outperforms existing Retinex approaches. In particular, the combined visibility was improved from 0.81 to 1.06, while three surgeons generally agreed that the results were restored with much better visibility. The authors thank Dr. Stephen Pautler for facilitating the data acquisition, and Dr. A. Jonathan McLeod and Dr. Uditha Jayarathne for helpful discussions.
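
    As a rough illustration of the multiscale Retinex idea underlying the proposed method (the paper's bilateral weighting is deliberately omitted; the scale values, names and scipy-based implementation below are assumptions, not the authors' code):

    # Generic multiscale Retinex in the log domain: average over several
    # single-scale outputs log(I) - log(Gaussian surround of I).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_retinex(channel, sigmas=(15, 80, 250), eps=1.0):
        """Return the multiscale Retinex response of one image channel."""
        img = channel.astype(float) + eps          # avoid log(0)
        out = np.zeros_like(img)
        for sigma in sigmas:
            surround = gaussian_filter(img, sigma) + eps
            out += np.log(img) - np.log(surround)
        return out / len(sigmas)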

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) objects may be stored in, and retrieved from, a pre-attentional store during this task.

    Visibility recovery on images acquired in attenuating media. Application to underwater, fog, and mammographic imaging

    When acquired in attenuating media, digital images often suffer from a particularly complex degradation that reduces their visual quality, hindering their suitability for further computational applications, or simply decreasing their visual pleasantness for the user. In these cases, mathematical image processing reveals itself as an ideal tool to recover some of the information lost during the degradation process. In this dissertation, we deal with three such practical scenarios in which this problem is especially relevant, namely, underwater image enhancement, fog removal and mammographic image processing. In the case of digital mammograms, X-ray beams traverse human tissue, and electronic detectors capture them as they reach the other side. However, the superposition of three-dimensional structures onto a bidimensional image produces low-contrast images in which structures of interest suffer from diminished visibility, obstructing diagnosis tasks. Regarding fog removal, the loss of contrast is produced by the atmospheric conditions, and white colour takes over the scene uniformly as distance increases, also reducing visibility. For underwater images, there is an added difficulty, since colour is not lost uniformly; instead, red colours decay the fastest, and green and blue colours typically dominate the acquired images. To address all these challenges, in this dissertation we develop new methodologies that rely on: a) physical models of the observed degradation, and b) the calculus of variations. Equipped with this powerful machinery, we design novel theoretical and computational tools, including image-dependent functional energies that capture the particularities of each degradation model. These energies are composed of different integral terms that are simultaneously minimized by means of efficient numerical schemes, producing a clean, visually pleasant and useful output image, with better contrast and increased visibility. In every considered application, we provide comprehensive qualitative (visual) and quantitative experimental results to validate our methods, confirming that the developed techniques outperform other existing approaches in the literature.
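
    The kind of image-dependent energy referred to above can be sketched schematically; the terms and weights below are generic placeholders (a fidelity term, a dispersion term and a non-local contrast term), not the dissertation's actual functionals:

    % Generic variational energy minimised over the enhanced image u,
    % given the observed image u_0 and a spatial weighting kernel w(x, y).
    E(u) = \frac{\alpha}{2}\int_\Omega \bigl(u(x)-u_0(x)\bigr)^2\,dx
         + \frac{\beta}{2}\int_\Omega \Bigl(u(x)-\tfrac{1}{2}\Bigr)^2\,dx
         - \frac{\gamma}{2}\iint_{\Omega\times\Omega} w(x,y)\,\bigl|u(x)-u(y)\bigr|\,dx\,dy

    Minimising an energy of this form with an efficient numerical scheme (for instance, gradient descent on its Euler-Lagrange equation) trades off fidelity to the observed image against increased local contrast and visibility.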

    Colour coded

    This 300-word publication, to be published by the Society of Dyers and Colourists (SDC), is a collection of the best papers from a 4-year European project that has considered colour from the perspective of both the arts and the sciences. The notion of art and science and the crossovers between the two resulted in an application and funding for cross-disciplinary research to host a series of training events between 2006 and 2010: Marie Curie Conferences & Training Courses (SCF), Call Identifier FP6-Mobility-4, Euros 532,363.80, CREATE – Colour Research for European Advanced Technology Employment. The research crossovers between the fields of art, science and technology were also a subject initiated through Bristol's Festival of Ideas events in May 2009. The author coordinated and chaired an event during which the C. P. Snow lecture 'On Two Cultures' (1959) was re-presented by actor Simon Cook, followed by a lecture by Raymond Tallis on the notion of the polymath. The CREATE project has a worldwide impact for researchers, academics and scientists. Between January and October 2009, the site received 221,414 visits. The most popular route into the site is via the welcome page. The main groups of visitors originate in the UK (including Northern Ireland), Italy, France, Finland, Norway, Hungary, the USA and Spain. A basic percentage breakdown of the traffic over ten months indicates: USA 15%; UK 16%; Italy 13%; France 12%; Hungary 10%; Spain 6%; Finland 9%; Norway 5%. The remaining approximately 14% of visitors come from other countries, including Belgium, the Netherlands and Germany (approx. 3%). A discussion group has been initiated by the author as part of the CREATE project to facilitate an ongoing dialogue between artists and scientists: http://createcolour.ning.com/group/artandscience, www.create.uwe.ac.uk. Related papers to this research: A report on the CREATE Italian event, Colour in cultural heritage. C. Parraman, A. Rizzi, 'Developing the CREATE network in Europe', in Colour in Art, Design and Nature, Edinburgh, 24 October 2008. C. Parraman, 'Mixing and describing colour', CREATE (Training event 1), France, 2008.

    Automatic facial recognition based on facial feature analysis


    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model addresses face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals, in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction, using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal to be considered.

    Entropy in Image Analysis III

    Image analysis can be applied to rich and assorted scenarios; therefore, the aim of this recent research field is not only to mimic the human vision system. Image analysis is one of the main methods computers use today, and there is a body of knowledge that they will be able to manage in a totally unsupervised manner in the future, thanks to artificial intelligence. The articles published in this book clearly point towards such a future.