Scene-Dependency of Spatial Image Quality Metrics
This thesis is concerned with the measurement of spatial imaging performance and the modelling of spatial image quality in digital capturing systems. Spatial imaging performance and image quality relate to the objective and subjective reproduction of luminance contrast signals by the system, respectively; they are critical to overall perceived image quality.
The Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) describe the signal (contrast) transfer and noise characteristics of a system, respectively, with respect to spatial frequency. They are both, strictly speaking, only applicable to linear systems since they are founded upon linear system theory. Many contemporary capture systems use adaptive image signal processing, such as denoising and sharpening, to optimise output image quality. These non-linear processes change their behaviour according to characteristics of the input signal (i.e. the scene being captured). This behaviour renders system performance “scene-dependent” and difficult to measure accurately. The MTF and NPS are traditionally measured from test charts containing suitable predefined signals (e.g. edges, sinusoidal exposures, noise or uniform luminance patches). These signals trigger adaptive processes at uncharacteristic levels since they are unrepresentative of natural scene content. Thus, for systems using adaptive processes, the resultant MTFs and NPSs are not representative of performance “in the field” (i.e. capturing real scenes).
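The chart-based measurements described above can be sketched in simplified form: an MTF estimated from an edge spread function (differentiate to the line spread function, Fourier transform, normalise), and a 1-D NPS estimated from a uniform patch. This is an illustrative simplification with synthetic data and our own function names, not the procedure used in the thesis or in the relevant ISO standards.

```python
import numpy as np

def mtf_from_edge(edge_profile, dx=1.0):
    """Estimate an MTF from a 1-D edge spread function (ESF).

    Differentiating the ESF gives the line spread function (LSF);
    the modulus of its Fourier transform, normalised to unity at
    zero frequency, is the MTF.
    """
    lsf = np.gradient(edge_profile, dx)
    lsf = lsf * np.hanning(lsf.size)        # window to limit spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

def nps_from_patch(patch, dx=1.0):
    """Estimate a 1-D noise power spectrum from a uniform patch.

    Each row is mean-subtracted and transformed; the squared moduli
    are averaged over rows (a simple periodogram estimate).
    """
    rows = patch - patch.mean(axis=1, keepdims=True)
    spectra = np.abs(np.fft.rfft(rows, axis=1)) ** 2
    return spectra.mean(axis=0) * dx / patch.shape[1]

# Synthetic example: a blurred step edge and a noisy uniform patch.
x = np.linspace(-5, 5, 256)
esf = 1.0 / (1.0 + np.exp(-x * 2.0))        # sigmoid edge
mtf = mtf_from_edge(esf)

rng = np.random.default_rng(0)
patch = 0.5 + 0.01 * rng.standard_normal((64, 256))
nps = nps_from_patch(patch)
```

Such chart-based estimates are exactly the ones that adaptive processing renders unrepresentative: a denoiser or sharpener responds differently to a clean step edge or flat patch than to natural scene texture.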
Spatial image quality metrics for capturing systems aim to predict the relationship between MTF and NPS measurements and subjective ratings of image quality. They cascade both measures with contrast sensitivity functions that describe human visual sensitivity with respect to spatial frequency. The most recent metrics designed for adaptive systems use MTFs measured using the dead leaves test chart that is more representative of natural scene content than the abovementioned test charts. This marks a step toward modelling image quality with respect to real scene signals.
This thesis presents novel scene-and-process-dependent MTFs (SPD-MTF) and NPSs (SPD-NPS). They are measured from imaged pictorial scene (or dead leaves target) signals to account for system scene-dependency. Further, a number of spatial image quality metrics are revised to account for capture system and visual scene-dependency: their MTF and NPS parameters are replaced by SPD-MTFs and SPD-NPSs, and their standard visual functions are replaced by contextual detection (cCSF) or discrimination (cVPF) functions. In addition, two novel spatial image quality metrics are presented (the log Noise Equivalent Quanta (NEQ) and Visual log NEQ) that implement SPD-MTFs and SPD-NPSs.
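As background to the NEQ family of measures, Noise Equivalent Quanta combines signal transfer and noise as NEQ(ν) ∝ MTF²(ν)/NPS(ν). The sketch below computes a hypothetical log-NEQ style score, optionally CSF-weighted for a "visual" variant; the exact weighting and normalisation used by the thesis metrics may differ, and all names here are ours.

```python
import numpy as np

def log_neq_metric(mtf, nps, signal_power=1.0, csf=None):
    """Hedged sketch of a (Visual) log NEQ style metric.

    NEQ(v) = signal_power * MTF(v)**2 / NPS(v); the score here averages
    log10(NEQ) over spatial frequency, optionally weighted by a
    contrast sensitivity function (CSF) for the 'visual' variant.
    """
    neq = signal_power * mtf ** 2 / np.maximum(nps, 1e-12)
    log_neq = np.log10(np.maximum(neq, 1e-12))
    if csf is not None:
        return (log_neq * csf / csf.sum()).sum()   # CSF-weighted mean
    return log_neq.mean()

freqs = np.linspace(0, 0.5, 65)          # spatial frequency, cycles/pixel
mtf = np.exp(-4.0 * freqs)               # plausible falling MTF
nps = np.full_like(freqs, 1e-4)          # flat (white) noise spectrum
csf = freqs * np.exp(-8.0 * freqs)       # simple band-pass CSF shape
csf[0] = 1e-6                            # avoid a zero weight at DC
score = log_neq_metric(mtf, nps, csf=csf)
```

Because the numerator and denominator can be swapped for SPD-MTFs and SPD-NPSs without changing the metric's structure, scene-dependent measurements slot directly into this form.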
The metrics, SPD-MTFs and SPD-NPSs were validated by analysing measurements from simulated image capture pipelines that applied either linear or adaptive image signal processing. The SPD-NPS measures displayed little evidence of measurement error, and the metrics performed most accurately when they used SPD-NPSs measured from images of scenes. The benefit of deriving SPD-MTFs from images of scenes was traded off, however, against measurement bias. Most metrics performed most accurately with SPD-MTFs derived from dead leaves signals. Implementing the cCSF or cVPF did not increase metric accuracy.
The log NEQ and Visual log NEQ metrics proposed in this thesis were highly competitive, outperforming metrics of the same genre. They were also more consistent than the IEEE P1858 Camera Phone Image Quality (CPIQ) metric when their input parameters were modified. The advantages and limitations of all performance measures and metrics were discussed, as well as their practical implementation and relevant applications.
Algorithms for the enhancement of dynamic range and colour constancy of digital images & video
One of the main objectives in digital imaging is to mimic the capabilities of the human eye, and perhaps go beyond them in certain aspects. However, the human visual system is so versatile, complex, and only partially understood that no imaging technology to date has been able to accurately reproduce its capabilities. The extraordinary capabilities of the human eye thus expose a crucial shortcoming of digital imaging, since digital photography, video recording, and computer vision applications continue to demand more realistic and accurate image reproduction and analysis capabilities.
Over the decades, researchers have tried to solve the colour constancy problem and to extend the dynamic range of digital imaging devices by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partially due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and to the complexity of the processes by which the human visual system achieves effective colour constancy and dynamic range.
The aim of the research presented in this thesis is to enhance overall image quality within the image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer electronics imaging devices.
The experiments conducted in this research show that the proposed algorithms outperform state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this unique set of image processing algorithms shows that, if used within an image signal processor, they enable digital camera devices to mimic the human visual system's dynamic range and colour constancy capabilities; the ultimate goal of any state-of-the-art technique, or commercial imaging device.
Perception and Mitigation of Artifacts in a Flat Panel Tiled Display System
Flat panel displays continue to dominate the display market. Larger, higher resolution flat panel displays are now in demand for scientific, business, and entertainment purposes. Manufacturing such large displays is currently difficult and expensive. Alternatively, larger displays can be constructed by tiling smaller flat panel displays. While this approach may prove to be more cost effective, appropriate measures must be taken to achieve visual seamlessness and uniformity.
In this project we conducted a set of experiments to study the perception and mitigation of image artifacts in tiled display systems. In the first experiment we used a prototype tiled display to investigate its current viability and to understand what critical perceptible visual artifacts exist in this system. Based on word frequencies of the survey responses, the most disruptive artifacts perceived were ranked. On the basis of these findings, we conducted a second experiment to test the effectiveness of image processing algorithms designed to mitigate some of the most distracting artifacts without changing the physical properties of the display system. Still images were processed using several algorithms and evaluated by observers using magnitude scaling. Participants in the experiment noticed statistically significant improvement in image quality from one of the two algorithms. Similar testing should be conducted to evaluate the effectiveness of the algorithms on video content. While much work still needs to be done, the contributions of this project should enable the development of an image processing pipeline to mitigate perceived artifacts in flat panel display systems and provide the groundwork for extending such a pipeline to real-time applications.
Development of Solid Tunable Optics for Ultra-Miniature Imaging Systems
Digital image processing for prognostic and diagnostic clinical pathology
When digital imaging and image processing methods are applied to clinical diagnostic and prognostic needs, the methods can be seen to increase human understanding and provide objective measurements. Most current clinical applications are limited to providing subjective information to healthcare professionals rather than providing objective measures. This thesis provides detail of methods and systems that have been developed for both objective and subjective microscopy applications. A system framework is presented that provides a base for the development of microscopy imaging systems. This practical framework is based on currently available hardware and developed with standard software development tools. Image processing methods are applied to counter optical limitations of the bright field microscope, automating the system and allowing for unsupervised image capture and analysis.
Current literature provides evidence that 3D visualisation has provided increased insight and application in many clinical areas. There have been recent advancements in the use of 3D visualisation for the study of soft tissue structures, but its clinical application within histology remains limited. Methods and applications have been researched and further developed which allow for the 3D reconstruction and visualisation of soft tissue structures using microtomed serial histological sections. A system suitable for this need has been developed and is presented, giving consideration to image capture, data registration and 3D visualisation requirements. The developed system has been used to explore and increase 3D insight on clinical samples.
The area of automated objective image quantification of microscope slides presents the allure of providing objective methods that replace existing objective and subjective methods, increasing accuracy and reducing manual burden. One such existing objective test is DNA Image Ploidy, which seeks to characterise cancer by the measurement of DNA content within individual cell nuclei, an accepted but manually burdensome method. The main novelty of the work completed lies in the development of an automated system for DNA Image Ploidy measurement, combining methods for automatic specimen focus, segmentation, parametric extraction and the implementation of an automated cell type classification system.
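DNA Image Ploidy rests on integrated optical density (IOD): summing per-pixel optical densities over a segmented nucleus as a surrogate for DNA content, then expressing it as a ratio against reference diploid cells. A minimal sketch, with illustrative names, synthetic data, and classification thresholds that are not those of the thesis pipeline:

```python
import numpy as np

def integrated_optical_density(image, nucleus_mask, background):
    """Integrated optical density (IOD) of one stained nucleus.

    Per-pixel optical density is OD = log10(background / transmitted
    intensity); summing over the segmented nucleus approximates the
    stain mass, and hence DNA content, of that nucleus.
    """
    pixels = image[nucleus_mask]
    od = np.log10(background / np.maximum(pixels, 1.0))
    return od.sum()

def classify_ploidy(iod, diploid_reference):
    """Express DNA content in 'c' units against a diploid (2c) reference.

    Thresholds are illustrative only.
    """
    c_value = 2.0 * iod / diploid_reference
    if c_value < 2.5:
        return "diploid"
    if c_value < 4.5:
        return "tetraploid"
    return "aneuploid/polyploid"

# Synthetic nucleus: a dark square region on a bright background.
rng = np.random.default_rng(1)
img = np.full((64, 64), 240.0)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
img[mask] = 120.0 + rng.normal(0.0, 5.0, mask.sum())
iod = integrated_optical_density(img, mask, background=240.0)
```

In an automated system, the mask would come from the segmentation stage and the diploid reference from a measured population of normal cells, which is part of what makes the manual method burdensome.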
A consideration for any clinical image processing system is the correct sampling of the tissue under study. While the image capture requirements for both objective systems and subjective systems are similar, there is also an important link to the 3D structure of the tissue. 3D understanding can aid in decisions regarding the sampling criteria of objective tests, as although many tests are completed in the 2D realm, the clinical samples are 3D objects. Cancers such as prostate and breast cancer are known to be multi-focal, with seemingly physically independent areas of disease within a single site. It is not possible to understand the true 3D nature of the samples using 2D microtomed sections in isolation from each other. The 3D systems described in this report provide a platform for the exploration of the true multi-focal nature of diseased soft tissue structures, allowing the sampling criteria of objective tests such as DNA Image Ploidy to be correctly set.
For the automated DNA Image Ploidy and the 3D reconstruction and visualisation systems, clinical review has been completed to test the increased insights provided. Datasets which have been reconstructed from microtomed serial sections and visualised with the developed 3D system are presented. For the automated DNA Image Ploidy system, the developed system is compared with the existing manual method to qualify the quality of data capture, operational speed and correctness of nuclei classification.
Conclusions are presented for the work that has been completed, and discussion is given as to future areas of research that could be undertaken, extending the areas of study and increasing both clinical insight and practical application.
The effect of scene content on image quality
Device-dependent metrics attempt to predict image quality from an ‘average signal’, usually embodied in test targets. Consequently, the metrics perform well on individual ‘average looking’ scenes and test targets, but provide lower correlation with subjective assessments when working with a variety of scenes whose characteristics differ from the ‘average signal’. This study considers the issue of scene dependency in image quality. It aims to quantify the change in quality with scene content, to research the problem of scene dependency in relation to device-dependent image quality metrics, and to provide a solution to it.
A novel subjective scaling method was developed in order to derive individual attribute scales, using the results from the overall image quality assessments. This was an analytical top-down approach, which does not require separate scaling of individual attributes and does not assume that each attribute is independent of other attributes. From the measurements, interval scales were created and the effective scene dependency factor was calculated for each attribute. Two device-dependent image quality metrics, the Effective Pictorial Information Capacity (EPIC) and the Perceived Information Capacity (PIC), were used to predict subjective image quality for a test set that varied in sharpness and noisiness. These metrics were found to be reliable predictors of image quality. However, they were not equally successful in predicting quality for different images with varying scene content.
Objective scene classification was thus considered and employed in order to deal with the problem of scene dependency in device-dependent metrics. It used objective scene descriptors, which correlated with subjective criteria on scene susceptibility. This process resulted in the development of a fully automatic classification of scenes into ‘standard’ and ‘non-standard’ groups, and the result allows the calculation of calibrated metric values for each group. The classification and metric calibration performance was quite encouraging, not only because it improved mean image quality predictions for all scenes, but also because it catered for non-standard scenes, which originally produced low correlations. The findings indicate that the proposed automatic scene classification method has great potential for tackling the problem of scene dependency when modelling device-dependent image quality. In addition, possible further studies of objective scene classification are discussed.
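The classify-then-calibrate idea above might be sketched as follows; the descriptor names, thresholds, and linear per-group calibration are hypothetical placeholders for those developed in the study, shown only to illustrate the structure of the approach.

```python
def classify_scene(descriptors, thresholds):
    """Label a scene 'standard' or 'non-standard' from objective descriptors.

    A scene is 'standard' when every descriptor falls inside its
    expected (lo, hi) range; any out-of-range descriptor marks the
    scene 'non-standard'. Descriptor names are illustrative.
    """
    outliers = sum(1 for name, value in descriptors.items()
                   if not (thresholds[name][0] <= value <= thresholds[name][1]))
    return "standard" if outliers == 0 else "non-standard"

def calibrated_metric(raw_metric, scene_class, calibration):
    """Apply a per-group linear calibration (gain, offset) to a raw metric value."""
    gain, offset = calibration[scene_class]
    return gain * raw_metric + offset

# Hypothetical descriptor ranges and per-group calibrations.
thresholds = {"busyness": (0.2, 0.8), "contrast": (0.3, 0.9)}
calibration = {"standard": (1.0, 0.0), "non-standard": (0.8, 5.0)}

scene = {"busyness": 0.95, "contrast": 0.5}   # a very busy scene
group = classify_scene(scene, thresholds)
quality = calibrated_metric(42.0, group, calibration)
```

The point of the two-stage design is that a single metric-to-quality mapping need not fit all scenes: scenes that break the ‘average signal’ assumption get their own calibration instead of dragging down the overall correlation.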
Head tracking two-image 3D television displays
The research covered in this thesis encompasses the design of novel 3D displays; a consideration of 3D television requirements and a survey of autostereoscopic methods are also presented. The principle of operation of simple 3D display prototypes is described, and the design of the components of the optical systems is considered. A description of an appropriate non-contact infrared head tracking method suitable for use with 3D television displays is also included.
The thesis describes how the operating principle of the displays is based upon a two-image system comprising a pair of images presented to the appropriate viewers' eyes. This is achieved by means of novel steering optics positioned behind a direct view liquid crystal display (LCD) that is controlled by a head position tracker. Within the work, two separate prototypes are described, both of which provide 3D to a single viewer who has limited movement. The thesis goes on to describe how these prototypes can be developed into a multiple-viewer display that is suitable for television use.
A consideration of 3D television requirements is documented, showing that glasses-free viewing (autostereoscopic), freedom of viewer movement and practical designs are important factors for 3D television displays.
The displays are novel in design in several important aspects that comply with the requirements for 3D television. Firstly, they do not require viewers to wear special glasses; secondly, the displays allow viewers to move freely when viewing; and finally, the design of the displays is practical, with a housing size similar to modern television sets and a cost that is not excessive. Surveys of other autostereoscopic methods included within the work suggest that no contemporary 3D display offers all of these important factors.
Accessible software frameworks for reproducible image analysis of host-pathogen interactions
To understand the mechanisms behind life-threatening diseases, the underlying interactions between host cells and pathogenic microorganisms must be known. Continuous improvements in imaging techniques and computer technologies enable the application of methods from image-based systems biology, which uses modern computer algorithms to precisely measure the behaviour of cells, tissues, or whole organs. To meet the standards of digital research data management, algorithms must comply with the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) and contribute to their dissemination within the scientific community. This is particularly important for interdisciplinary teams of experimentalists and computer scientists, in which computer programs can improve communication and accelerate the adoption of new technologies. In this work, software frameworks were therefore developed that help to spread the FAIR principles through the development of standardised, reproducible, high-performance, and easily accessible software packages for quantifying interactions in biological systems. In summary, this work shows how software frameworks can contribute to the characterisation of interactions between host cells and pathogens by simplifying the design and application of quantitative and FAIR-compliant image analysis programs. These improvements will facilitate future collaborations with life scientists and clinicians, which, following the principle of image-based systems biology, will lead to the development of new experiments, imaging techniques, algorithms, and computer models.