
    Camera System Performance Derived from Natural Scenes

    The Modulation Transfer Function (MTF) is a well-established measure of camera system performance, commonly employed to characterize optical and image capture systems. It is a measure based on Linear System Theory; thus, its use relies on the assumption that the system is linear and stationary. This is not the case with modern-day camera systems that incorporate non-linear image signal processing (ISP) to improve the output image. Non-linearities result in variations in camera system performance, which are dependent upon the specific input signals. This paper discusses the development of a novel framework, designed to acquire MTFs directly from images of natural complex scenes, thus making the use of traditional test charts with set patterns redundant. The framework is based on extraction, characterization and classification of edges found within images of natural scenes. Scene-derived performance measures aim to characterize non-linear image processes incorporated in modern cameras more faithfully. Further, they can produce ‘live’ performance measures, acquired directly from camera feeds.
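
    As a rough illustration of the edge-based approach this framework builds on, the sketch below estimates an MTF from a single step-edge region of interest using the generic ESF-to-LSF-to-FFT chain of the slanted-edge method. The row-averaging shortcut, function names and synthetic edge are illustrative assumptions and do not reproduce the paper's extraction, characterization and classification stages.

```python
# Minimal sketch: MTF from one step-edge ROI (assumed near-vertical edge).
# ESF -> LSF -> FFT; a full slanted-edge implementation would also estimate
# the edge angle and build a super-sampled ESF by projecting along the edge.
import numpy as np

def edge_roi_mtf(roi):
    """Return (spatial frequency in cycles/pixel, normalised MTF) for a ROI."""
    esf = roi.astype(float).mean(axis=0)   # edge spread function (row average)
    lsf = np.gradient(esf)                 # line spread function
    lsf *= np.hamming(lsf.size)            # window to suppress far-from-edge noise
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                          # normalise to 1 at zero frequency
    return np.fft.rfftfreq(lsf.size), mtf

# Synthetic blurred step edge standing in for an edge found in a natural scene.
x = np.linspace(-5, 5, 64)
edge_profile = 1.0 / (1.0 + np.exp(-2.0 * x))
roi = np.tile(edge_profile, (32, 1))       # 32 rows containing the same edge
freqs, mtf = edge_roi_mtf(roi)
print(f"MTF50 ~ {freqs[np.argmax(mtf < 0.5)]:.3f} cycles/pixel")
```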

    Adaptive Optics Scanning Ophthalmoscopy with Annular Pupils

    Annular apodization of the illumination and/or imaging pupils of an adaptive optics scanning light ophthalmoscope (AOSLO) for improving transverse resolution was evaluated using three different normalized inner radii (0.26, 0.39 and 0.52). In vivo imaging of the human photoreceptor mosaic at 0.5 and 10° from fixation indicates that the use of an annular illumination pupil and a circular imaging pupil provides the most benefit of all configurations when using a one Airy disk diameter pinhole, in agreement with the paraxial confocal microscopy theory. Annular illumination pupils with 0.26 and 0.39 normalized inner radii performed best in terms of the narrowing of the autocorrelation central lobe (between 7 and 12%), and the increase in manual and automated photoreceptor counts (8 to 20% more cones and 11 to 29% more rods). It was observed that the use of annular pupils with large inner radii can result in multi-modal cone photoreceptor intensity profiles. The effect of the annular masks on the average photoreceptor intensity is consistent with the Stiles-Crawford effect (SCE). This indicates that combinations of images of the same photoreceptors with different apodization configurations and/or annular masks can be used to distinguish cones from rods, even when the former have complex multi-modal intensity profiles. In addition to narrowing the point spread function transversally, the use of annular apodizing masks also elongates it axially, a fact that can be used for extending the depth of focus of techniques such as adaptive optics optical coherence tomography (AOOCT). Finally, the positive results from this work suggest that annular pupil apodization could be used in refractive or catadioptric adaptive optics ophthalmoscopes to mitigate undesired back-reflections.
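
    The transverse narrowing reported here follows from diffraction by an annular aperture, which can be illustrated with the toy calculation below: it computes the incoherent PSF of a circular pupil with a central obscuration and reports a crude FWHM for the inner radii studied. The grid size, pupil sampling and pixel-counted FWHM are illustrative choices, not the AOSLO's actual imaging model.

```python
# Minimal sketch: central-lobe width of the diffraction PSF for annular pupils.
import numpy as np

N = 512                                        # pupil-plane grid size
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y) / (N // 16)                 # radius normalised to the pupil edge

def psf_profile(inner_radius):
    """Central cross-section of the incoherent PSF for an obscured pupil."""
    pupil = ((r <= 1.0) & (r >= inner_radius)).astype(float)
    intensity = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return intensity[N // 2] / intensity.max() # row through the PSF peak

for inner in (0.0, 0.26, 0.39, 0.52):          # 0.0 = unobscured reference
    fwhm = np.count_nonzero(psf_profile(inner) >= 0.5)   # crude FWHM in pixels
    print(f"normalised inner radius {inner:.2f}: central-lobe FWHM ~ {fwhm} px")
```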

    Spectral Visualization Sharpening

    In this paper, we propose a perceptually-guided visualization sharpening technique. We analyze the spectral behavior of an established comprehensive perceptual model to arrive at our approximated model based on an adapted weighting of the bandpass images from a Gaussian pyramid. The main benefit of this approximated model is its controllability and predictability for sharpening color-mapped visualizations. Our method can be integrated into any visualization tool as it adopts generic image-based post-processing, and it is intuitive and easy to use as viewing distance is the only parameter. Using highly diverse datasets, we show the usefulness of our method across a wide range of typical visualizations.
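
    The underlying mechanism, re-weighting the bandpass images of a Gaussian pyramid before recombining them, can be sketched as follows. The subsampling-free pyramid and the fixed band weights are placeholder assumptions; the paper instead derives the weighting from its approximated perceptual model and the viewing distance.

```python
# Minimal sketch: sharpen by boosting difference-of-Gaussian bandpass images.
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass_sharpen(img, weights=(1.4, 1.25, 1.1)):
    """Re-weight bandpass levels; weights of 1.0 reproduce the input exactly."""
    img = img.astype(float)
    # Progressively blurred copies (a Gaussian pyramid without subsampling).
    levels = [img] + [gaussian_filter(img, 2.0 ** i) for i in range(len(weights))]
    bands = [levels[i] - levels[i + 1] for i in range(len(weights))]
    out = levels[-1] + sum(w * b for w, b in zip(weights, bands))
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
test = gaussian_filter(rng.random((128, 128)), 3.0)   # smooth test image in [0, 1]
print(test.std(), bandpass_sharpen(test).std())       # contrast increases
```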

    An adaptive optics system for astronomical image sharpening

    Images of stars in the focal plane of large astronomical telescopes are many times larger than the images expected from diffraction theory. Much of this blurring of the image is due to variations in the optical properties of the atmosphere above the observatory, which for small aperture telescopes cause motion of the image, and for large aperture telescopes cause the image to break up into a number of smaller sub-images. This thesis describes a prototype adaptive system designed to sharpen astronomical images in real time. The sharpening is achieved by removing the atmospherically induced motions of the image with servo-looped plane mirrors driven by piezo-electric actuators. The results of real-time sharpening obtained with the adaptive system at the William Herschel Telescope (WHT) are presented, along with the results of an investigation into the characteristics of the atmospheric limitations at the La Palma observatory site.
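
    The correction principle can be illustrated with a toy closed loop: estimate the instantaneous image motion (here from an intensity centroid) and command a steering mirror to cancel it. The Gaussian star model, proportional gain and turbulence statistics below are illustrative assumptions, not the prototype's hardware or control law.

```python
# Minimal sketch: tip/tilt image-motion correction with a proportional servo loop.
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, column) of an image."""
    ys, xs = np.indices(img.shape)
    return np.array([(ys * img).sum(), (xs * img).sum()]) / img.sum()

def star_image(offset, size=64, fwhm=6.0):
    """Synthetic seeing-blurred star displaced by `offset` = (dy, dx) pixels."""
    ys, xs = np.indices((size, size))
    sigma = fwhm / 2.355
    cy, cx = size / 2 + offset[0], size / 2 + offset[1]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(1)
mirror = np.zeros(2)              # accumulated tip/tilt command, in pixels
gain = 0.5                        # proportional loop gain
target = np.array([32.0, 32.0])   # desired image position on the detector

for step in range(10):
    image_motion = rng.standard_normal(2) * 0.3 + np.array([4.0, -3.0])
    frame = star_image(image_motion - mirror)     # mirror subtracts the motion
    error = centroid(frame) - target              # residual image displacement
    mirror += gain * error                        # drive the mirror to cancel it
    print(f"step {step}: residual motion ~ {np.hypot(*error):.2f} px")
```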

    Fourier-based image sharpness sensor for adaptive optics correction

    Adaptive optics reduces undesirable turbulence effects present during propagation and imaging through the atmosphere or another random medium. Within an adaptive optics system, wavefront sensing determines the incoming wavefront errors. Image sharpening is one method of wavefront sensing where the sharpness value is measured from the image intensity based on a given sharpness metric. The wavefront correction device is then perturbed until the sharpness value is maximized. The key to image sharpening is defining sharpness with a sharpness metric that reaches a maximum when wavefront error is zero. Present image sharpness metrics often use the image intensity. In contrast, this dissertation introduces four novel sharpness metrics based on the Fourier transform of the image. Since high spatial frequencies carry information about the image’s edges and fine details, taking the Fourier transform and maximizing the high spatial frequencies sharpens the image. Coherence of the illumination source and the choice of sharpness metric determine which of the presented optical system configurations to use. Performances of the Fourier-based sharpness metrics are observed and compared by measuring the sharpness value while adding defocus to the system. If the sharpness value reaches a maximum with zero wavefront error, then the sharpness metric is successful. This investigation continues by adding astigmatism, coma, and spherical aberration and measuring the sharpness value to see the effect of these higher-order aberrations. The sharpness metrics are then implemented into a simple manual closed-loop correction system. This dissertation presents successful performance results for these novel Fourier-based sharpness metrics, showing great promise for use in adaptive optics correction.
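
    A generic metric in this spirit rewards spectral energy at high spatial frequencies, as in the sketch below, where sharpness falls as blur (a stand-in for defocus) is added. The radial cutoff and normalisation are illustrative and do not reproduce the four metrics defined in the dissertation.

```python
# Minimal sketch: sharpness as the fraction of spectral energy above a cutoff.
import numpy as np
from scipy.ndimage import gaussian_filter

def fourier_sharpness(img, cutoff=0.15):
    """Fraction of power above a radial spatial-frequency cutoff (cycles/pixel)."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return power[radius > cutoff].sum() / power.sum()

rng = np.random.default_rng(0)
scene = rng.random((256, 256))
for blur in (0.0, 1.0, 2.0, 4.0):                 # increasing blur ~ more defocus
    img = gaussian_filter(scene, blur) if blur else scene
    print(f"blur sigma {blur:.1f}: sharpness = {fourier_sharpness(img):.4f}")
```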

    Camera Spatial Frequency Response Derived from Pictorial Natural Scenes

    Camera system performance is a prominent part of many aspects of imaging science and computer vision. There are many aspects to camera performance that determine how accurately the image represents the scene, including measurements of colour accuracy, tone reproduction, geometric distortions, and image noise evaluation. The research conducted in this thesis focuses on the Modulation Transfer Function (MTF), a widely used camera performance measurement employed to describe resolution and sharpness. Traditionally measured under controlled conditions with characterised test charts, the MTF is a measurement restricted to laboratory settings. The MTF is based on linear system theory, meaning the relationship between input and output must be linear. Established methods for measuring the camera system MTF include ISO12233:2017 for measuring the edge-based Spatial Frequency Response (e-SFR), a sister measure of the MTF designed for measuring discrete systems. Many modern camera systems incorporate non-linear, highly adaptive image signal processing (ISP) to improve image quality. As a result, system performance becomes scene- and processing-dependent, adapting to the scene contents captured by the camera. Established test-chart-based MTF/SFR methods do not describe this adaptive nature; they only provide the response of the camera to a test chart signal. Further, with the increased use of Deep Neural Networks (DNN) for image recognition tasks and autonomous vision systems, there is an increased need for monitoring system performance outside laboratory conditions in real time, i.e. live-MTF. Such measurements would assist in monitoring camera systems to ensure they are fully operational for decision-critical tasks.

    This thesis presents research conducted to develop a novel automated methodology that estimates the standard e-SFR directly from pictorial natural scenes. This methodology has the potential to produce scene-dependent and real-time camera system performance measurements, opening new possibilities in imaging science and allowing live monitoring/calibration of systems for autonomous computer vision applications. The proposed methodology incorporates many well-established image processes, as well as others developed for specific purposes, and is presented in two parts. Firstly, Natural Scene derived SFRs (NS-SFRs) are obtained from step edges isolated in the captured scenes, after verifying that these edges have the correct profile for use in the slanted-edge algorithm. The resulting NS-SFRs are shown to be a function of both camera system performance and scene contents. The second part of the methodology uses a series of derived NS-SFRs to estimate the system e-SFR, as per the ISO12233 standard. This is achieved by applying a sequence of thresholds to segment the data most likely to correspond to the system performance. These thresholds a) group the expected optical performance variation across the imaging circle within radial distance segments, b) obtain the highest-performance NS-SFRs per segment and c) select the NS-SFRs with input edge and region of interest (ROI) parameter ranges shown to introduce minimal e-SFR variation. The selected NS-SFRs are averaged per radial segment to estimate system e-SFRs across the field of view, and a weighted average of these estimates provides an overall system performance estimation.

    This methodology is implemented for e-SFR estimation of three characterised camera systems, two near-linear and one highly non-linear. Investigations are conducted using large, diverse image datasets, as well as restricting scene content and the number of images used for the estimation. The resulting estimates are comparable to ISO12233 e-SFRs derived from test chart inputs for the near-linear systems; the overall estimate stays within one standard deviation of the equivalent test chart measurement. Results from the highly non-linear system indicate scene and processing dependency, potentially leading to a more representative SFR measure than the current chart-based approaches for such systems. These results suggest that the proposed method is a viable alternative to the ISO technique.
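
    The selection and averaging stage of the second part can be sketched as follows. The radial segmentation, the area-under-curve ranking used to pick the highest-performing NS-SFRs, and the count-based weights are illustrative stand-ins for the thresholds and weighting described in the thesis.

```python
# Minimal sketch: combine NS-SFR curves into a system e-SFR estimate by
# radial segment, keeping only the best-performing curves in each segment.
import numpy as np

def estimate_system_sfr(ns_sfrs, radii, n_segments=3, keep_fraction=0.25):
    """ns_sfrs: (N, F) array of NS-SFR curves; radii: (N,) normalised radial
    distances of the edge ROIs from the image centre, in [0, 1]."""
    ns_sfrs, radii = np.asarray(ns_sfrs, float), np.asarray(radii, float)
    bounds = np.linspace(0.0, 1.0, n_segments + 1)
    segment_estimates, weights = [], []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        in_seg = (radii >= lo) & ((radii < hi) | (hi == 1.0))
        curves = ns_sfrs[in_seg]
        if curves.size == 0:
            continue
        score = curves.sum(axis=1)                     # crude performance summary
        n_keep = max(1, round(keep_fraction * len(curves)))
        best = curves[np.argsort(score)[-n_keep:]]     # highest-performing NS-SFRs
        segment_estimates.append(best.mean(axis=0))    # per-segment e-SFR estimate
        weights.append(in_seg.sum())                   # weight by available edges
    return np.average(segment_estimates, axis=0, weights=weights)

# Synthetic example: 200 edge ROIs, 32 frequency samples per curve.
rng = np.random.default_rng(2)
f = np.linspace(0.0, 0.5, 32)
curves = np.exp(-4.0 * f) * rng.uniform(0.6, 1.0, (200, 1))   # degraded variants
overall = estimate_system_sfr(curves, rng.uniform(0.0, 1.0, 200))
print(overall[:5])
```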