29 research outputs found

    Development of a new resolution enhancement technology for medical liquid crystal displays

    Get PDF
    Kanazawa University Graduate School of Medical Sciences, Quantum Medical Technology. A new resolution enhancement technology based on an independent sub-pixel driving method was developed for medical monochrome liquid crystal displays (LCDs). Each pixel of a monochrome LCD, which employs a color liquid crystal panel with the color filters removed, consists of three sub-pixels. In the new LCD system implemented with this technology, sub-pixel intensities were modulated according to detailed image information, and resolution was consequently enhanced three-fold. In addition, combined with adequate resolution improvement by image data processing, horizontal and vertical resolution properties were balanced. The new technology thus realized 9-megapixel (9 MP) ultra-high resolution from a 3 MP LCD. Physical measurements and perceptual evaluations proved that the achieved 9 MP resolution was appropriate and efficient for depicting finer anatomical structures, such as microcalcifications in mammography.
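
The independent sub-pixel driving idea can be sketched in a few lines. This is a minimal illustration of the mapping, not the authors' implementation; it assumes a grayscale source with three times the panel's horizontal pixel count.

```python
import numpy as np

# Sketch of independent sub-pixel driving (illustrative, assumed setup):
# a monochrome panel whose pixels each contain 3 sub-pixels can display a
# source image with 3x the horizontal pixel count by assigning one source
# column to each sub-pixel, instead of driving all three identically.

def drive_subpixels(hi_res, conventional=False):
    """hi_res: (rows, 3*cols) grayscale image.
    Returns a (rows, cols, 3) array of per-sub-pixel drive levels."""
    rows, w = hi_res.shape
    assert w % 3 == 0, "width must be a multiple of 3"
    sub = hi_res.reshape(rows, w // 3, 3)
    if conventional:
        # conventional driving: all 3 sub-pixels show the pixel average,
        # discarding the extra horizontal detail
        sub = np.repeat(sub.mean(axis=2, keepdims=True), 3, axis=2)
    return sub

img = np.arange(12, dtype=float).reshape(2, 6)    # 2x6 source -> 2x2 panel
independent = drive_subpixels(img)                # keeps all 6 columns/row
averaged = drive_subpixels(img, conventional=True)
print(independent[0, 0])   # distinct sub-pixel levels: [0. 1. 2.]
print(averaged[0, 0])      # detail lost: [1. 1. 1.]
```

The independent drive preserves three intensity samples per pixel along the horizontal axis, which is the source of the three-fold resolution gain described above.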

    Characteristics of flight simulator visual systems

    Get PDF
    The physical parameters of the flight simulator visual system that characterize the system and determine its fidelity are identified and defined. The characteristics of visual simulation systems are discussed in terms of the basic categories of spatial, energy, and temporal properties, corresponding to the three fundamental quantities of length, mass, and time. Each of these parameters is further addressed in relation to its effect, its appropriate units or descriptors, methods of measurement, and its use or importance to image quality.

    Flat panel display signal processing

    Get PDF
    Televisions (TVs) have shown considerable technological progress since their introduction almost a century ago. Starting out as small, dim and monochrome screens in wooden cabinets, TVs have evolved to large, bright and colorful displays in plastic boxes. It took until the turn of the century, however, for the TV to become like a ‘picture on the wall’. This happened when the bulky Cathode Ray Tube (CRT) was replaced with thin and light-weight Flat Panel Displays (FPDs), such as Liquid Crystal Displays (LCDs) or Plasma Display Panels (PDPs). However, the TV system and transmission formats are still strongly coupled to CRT technology, whereas FPDs use very different principles to convert the electronic video signal to visible images. These differences result in image artifacts that the CRT never had, but at the same time provide opportunities to improve FPD image quality beyond that of the CRT. This thesis presents an analysis of the properties of flat panel displays, their relation to image quality, and video signal processing algorithms to improve the quality of the displayed images. To analyze different types of displays, the display signal chain is described using basic principles common to all displays. The main function of a display is to create visible images (light) from an electronic signal (video), requiring display chain functions such as the opto-electronic effect, spatial and temporal addressing and reconstruction, and color synthesis. The properties of these functions are used to describe CRTs, LCDs, and PDPs, showing that these displays perform the same functions using different implementations. These differences have a number of consequences that are further investigated in this thesis. Spatial and temporal aspects, corresponding to ‘static’ and ‘dynamic’ resolution respectively, are covered in detail. 
Moreover, video signal processing is an essential part of the display signal chain for FPDs, because the display format will in general no longer match the source format. In this thesis, it is investigated how specific FPD properties, especially those related to spatial and temporal addressing and reconstruction, affect the video signal processing chain. A model of the display signal chain is presented and applied to analyze FPD spatial properties in relation to static resolution. In particular, the effect of the color subpixels, which enable color image reproduction in FPDs, is analyzed. The perceived display resolution is strongly influenced by the color subpixel arrangement. When this arrangement is taken into account in the signal chain, the perceived resolution on FPDs improves, clearly outperforming CRTs in this respect. The cause and effect of this improvement, also for alternative subpixel arrangements, is studied using the display signal model. However, the resolution increase cannot be achieved without video processing. This processing is efficiently combined with image scaling, which is always required in the FPD display signal chain, resulting in an algorithm called ‘subpixel image scaling’. A comparison of the effects of subpixel scaling on several subpixel arrangements shows that the largest increase in perceived resolution is found for two-dimensional subpixel arrangements. FPDs outperform CRTs with respect to static resolution, but not with respect to ‘dynamic resolution’, i.e. the perceived resolution of moving images. Life-like reproduction of moving images is an important requirement for a TV display, but the temporal properties of FPDs cause artifacts in moving images (‘motion artifacts’) that are not found in CRTs. A model of the temporal aspects of the display signal chain is used to analyze dynamic resolution and motion artifacts on several display types, in particular LCD and PDP. 
Furthermore, video signal processing algorithms are developed that can reduce motion artifacts and increase the dynamic resolution. The occurrence of motion artifacts is explained by the fact that the human visual system tracks moving objects. This tracking converts temporal effects on the display into perceived spatial effects, which can appear in very different ways. The analysis shows how addressing mismatches in the chain cause motion-dependent misalignment of image data, e.g. resulting in the ‘dynamic false contour’ artifact in PDPs. Also, non-ideal temporal reconstruction results in ‘motion blur’, i.e. a loss of sharpness of moving images, which is typical for LCDs. The relation between motion blur, dynamic resolution, and the temporal properties of LCDs is analyzed using the display signal model in the temporal (frequency) domain. The concepts of temporal aperture, motion aperture, and temporal display bandwidth are introduced, which enable characterization of motion blur in a simple and direct way. This is applied to compare several motion blur reduction methods based on modified display design and driving. This thesis further describes the development of several video processing algorithms that can reduce motion artifacts. It is shown that the motion of objects in the image plays an essential role in these algorithms, i.e. they require motion estimation and compensation techniques. In LCDs, video processing for motion artifact reduction involves a compensation for the temporal reconstruction characteristics of the display, leading to the ‘motion compensated inverse filtering’ algorithm. The display chain model is used to analyze this algorithm, and several methods to increase its performance are presented. In PDPs, motion artifact reduction can be achieved with ‘motion compensated subfield generation’, for which an advanced algorithm is presented.
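
The hold-type motion blur at the heart of this analysis can be illustrated with a simplified model (not the thesis' algorithms): when the eye tracks an object moving v pixels per frame on a display that holds each frame for the full frame period, the retinal image is approximately the original convolved with a box filter of width v.

```python
import numpy as np

# Simplified model of LCD hold-type motion blur (illustrative assumption):
# eye tracking of an object moving v px/frame on a sample-and-hold display
# integrates v pixels of the static image, i.e. a moving-average filter.

def hold_type_blur(line, v):
    """Apply a 1-D box (moving-average) filter of width v pixels."""
    kernel = np.ones(v) / v
    return np.convolve(line, kernel, mode="same")

edge = np.repeat([0.0, 1.0], 8)      # sharp vertical edge, 16 samples
blurred = hold_type_blur(edge, 4)    # object moving 4 px/frame
# the 1-px edge transition now ramps over ~4 pixels: 0, 0.25, 0.5, 0.75, 1
print(blurred[6:12])
```

Motion compensated inverse filtering, as named in the abstract, pre-compensates the signal along the estimated motion direction with an approximate inverse of this aperture, which is why motion estimation is a prerequisite.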

    Computational multi-spectral video imaging

    Full text link
    Multi-spectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multi-spectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information into a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multi-spectral image. We experimentally demonstrated a spectral resolution of 9.6 nm within the visible band (430 nm to 718 nm). We further show that the spatial resolution is enhanced by over 30% compared to the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Furthermore, our camera is able to computationally trade off spectral resolution against the field of view in software, without any change in hardware, as long as sufficient sensor pixels are utilized for information encoding. Since no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
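
The calibrate-then-invert step can be illustrated with a standard Tikhonov-regularized least-squares solve. The sensing matrix, problem sizes, and regularization weight below are invented for illustration and are not the paper's values.

```python
import numpy as np

# Hedged sketch of regularization-based linear inversion: calibration
# yields a sensing matrix A (sensor pixels x spectral/spatial unknowns);
# the coded sensor reading b is then inverted in closed form.

def tikhonov_invert(A, b, lam=1e-3):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 16))   # illustrative: 64 pixels, 16 unknowns
x_true = rng.standard_normal(16)    # ground-truth multi-spectral sample
b = A @ x_true                      # noiseless coded measurement
x_hat = tikhonov_invert(A, b)
print(np.allclose(x_hat, x_true, atol=1e-2))   # True
```

The regularization weight trades noise amplification against reconstruction bias; with real, noisy measurements it would be tuned rather than fixed.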

    Evaluation of changes in image appearance with changes in displayed image size

    Get PDF
    This research focused on the quantification of changes in image appearance when images are displayed at different image sizes on LCD devices. The final results were provided as calibrated Just Noticeable Differences (JNDs) on relevant perceptual scales, allowing the prediction of sharpness and contrast appearance with changes in the displayed image size. A series of psychophysical experiments was conducted to enable appearance predictions. Firstly, a rank order experiment was carried out to identify the image attributes that were most affected by changes in displayed image size. Two digital cameras, exhibiting very different reproduction qualities, were employed to capture the same scenes, for the investigation of the effect of the original image quality on image appearance changes. A wide range of scenes with different scene properties was used as a test set for the investigation of image appearance changes with scene type. The outcomes indicated that sharpness and contrast were the most important attributes for the majority of scene types and original image qualities. Appearance matching experiments were further conducted to quantify changes in perceived sharpness and contrast with respect to changes in the displayed image size. For the creation of sharpness matching stimuli, a set of frequency domain filters was designed to provide equal intervals in image quality, taking into account the system’s Spatial Frequency Response (SFR) and the observation distance. For the creation of contrast matching stimuli, a series of spatial domain S-shaped filters was designed to provide equal intervals in image contrast, by gamma adjustments. Five displayed image sizes were investigated. Observers were always asked to match the appearance of the smaller version of each stimulus to its larger reference. Lastly, rating experiments were conducted to validate the derived JNDs in perceptual quality for both sharpness and contrast stimuli. 
Data obtained from these experiments were finally converted into JND scales for each individual image attribute. Linear functions were fitted to the final data, allowing the prediction of the appearance of images viewed at larger sizes than those investigated in this research.
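
A minimal sketch of an S-shaped contrast manipulation in the spirit of the gamma-adjusted filters mentioned above (the thesis' actual filter designs are not reproduced here):

```python
import numpy as np

# Illustrative bias/gain-style S-curve on normalized pixel values.
# This stands in for the spatial-domain S-shaped contrast filters; the
# functional form here is a common choice, not the thesis' formulation.

def s_curve(img, g):
    """Values in [0, 1]; g > 1 raises mid-tone contrast, g < 1 lowers it."""
    img = np.clip(img, 0.0, 1.0)
    return img ** g / (img ** g + (1.0 - img) ** g)

ramp = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(s_curve(ramp, 2.0))   # mid-tones pushed toward 0 and 1
```

The endpoints and the mid-grey (0.5) are fixed points, so the curve changes contrast without shifting the overall black or white levels, which is what an equal-interval contrast series requires.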

    The effect of scene content on image quality

    Get PDF
    Device-dependent metrics attempt to predict image quality from an ‘average signal’, usually embodied in test targets. Consequently, the metrics perform well on individual ‘average looking’ scenes and test targets, but provide lower correlation with subjective assessments when working with a variety of scenes whose characteristics differ from the ‘average signal’. This study considers the issue of scene dependency in image quality: it aims to quantify the change in quality with scene content, to investigate the problem of scene dependency in relation to device-dependent image quality metrics, and to provide a solution to it. A novel subjective scaling method was developed in order to derive individual attribute scales, using the results from the overall image quality assessments. This was an analytical top-down approach, which does not require separate scaling of individual attributes and does not assume that each attribute is independent of the other attributes. From the measurements, interval scales were created and the effective scene dependency factor was calculated for each attribute. Two device-dependent image quality metrics, the Effective Pictorial Information Capacity (EPIC) and the Perceived Information Capacity (PIC), were used to predict subjective image quality for a test set that varied in sharpness and noisiness. These metrics were found to be reliable predictors of image quality. However, they were not equally successful in predicting quality for different images with varying scene content. Objective scene classification was thus considered and employed in order to deal with the problem of scene dependency in device-dependent metrics. It used objective scene descriptors that correlated with subjective criteria on scene susceptibility. 
This process resulted in the development of a fully automatic classification of scenes into ‘standard’ and ‘non-standard’ groups, and the result allows the calculation of calibrated metric values for each group. The classification and metric calibration performance was quite encouraging, not only because it improved mean image quality predictions from all scenes, but also because it catered for non-standard scenes, which originally produced low correlations. The findings indicate that the proposed automatic scene classification method has great potential for tackling the problem of scene dependency when modelling device-dependent image quality. In addition, possible further studies of objective scene classification are discussed.

    Scene-Dependency of Spatial Image Quality Metrics

    Get PDF
    This thesis is concerned with the measurement of spatial imaging performance and the modelling of spatial image quality in digital capturing systems. Spatial imaging performance and image quality relate to the objective and subjective reproduction of luminance contrast signals by the system, respectively; they are critical to overall perceived image quality. The Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) describe the signal (contrast) transfer and noise characteristics of a system, respectively, with respect to spatial frequency. They are both, strictly speaking, only applicable to linear systems since they are founded upon linear system theory. Many contemporary capture systems use adaptive image signal processing, such as denoising and sharpening, to optimise output image quality. These non-linear processes change their behaviour according to characteristics of the input signal (i.e. the scene being captured). This behaviour renders system performance “scene-dependent” and difficult to measure accurately. The MTF and NPS are traditionally measured from test charts containing suitable predefined signals (e.g. edges, sinusoidal exposures, noise or uniform luminance patches). These signals trigger adaptive processes at uncharacteristic levels since they are unrepresentative of natural scene content. Thus, for systems using adaptive processes, the resultant MTFs and NPSs are not representative of performance “in the field” (i.e. capturing real scenes). Spatial image quality metrics for capturing systems aim to predict the relationship between MTF and NPS measurements and subjective ratings of image quality. They cascade both measures with contrast sensitivity functions that describe human visual sensitivity with respect to spatial frequency. The most recent metrics designed for adaptive systems use MTFs measured using the dead leaves test chart that is more representative of natural scene content than the abovementioned test charts. 
This marks a step toward modelling image quality with respect to real scene signals. This thesis presents novel scene-and-process-dependent MTFs (SPD-MTFs) and NPSs (SPD-NPSs). They are measured from imaged pictorial scene (or dead leaves target) signals to account for system scene-dependency. Further, a number of spatial image quality metrics are revised to account for capture system and visual scene-dependency: their MTF and NPS parameters were replaced with SPD-MTFs and SPD-NPSs, and their standard visual functions were replaced with contextual detection (cCSF) or contextual discrimination (cVPF) functions. In addition, two novel spatial image quality metrics are presented (the log Noise Equivalent Quanta (NEQ) and Visual log NEQ) that implement SPD-MTFs and SPD-NPSs. The metrics, SPD-MTFs and SPD-NPSs were validated by analysing measurements from simulated image capture pipelines that applied either linear or adaptive image signal processing. The SPD-NPS measures displayed little evidence of measurement error, and the metrics performed most accurately when they used SPD-NPSs measured from images of scenes. The benefit of deriving SPD-MTFs from images of scenes was traded off, however, against measurement bias. Most metrics performed most accurately with SPD-MTFs derived from dead leaves signals. Implementing the cCSF or cVPF did not increase metric accuracy. The log NEQ and Visual log NEQ metrics proposed in this thesis were highly competitive, outperforming metrics of the same genre. They were also more consistent than the IEEE P1858 Camera Phone Image Quality (CPIQ) metric when their input parameters were modified. The advantages and limitations of all performance measures and metrics were discussed, as well as their practical implementation and relevant applications.
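
For readers unfamiliar with the NEQ family of measures: a common textbook definition combines signal transfer and noise as NEQ(u) = (S·MTF(u))²/NPS(u) for mean signal S. The sketch below computes a log-NEQ style summary from sampled curves; the curves and the equal-weight frequency average are illustrative and may differ from the thesis' exact formulation.

```python
import numpy as np

# Illustrative log-NEQ figure of merit from sampled MTF and NPS curves.
# The frequency axis, MTF shapes, and flat NPS are invented for the demo.

def log_neq_metric(mtf, nps, mean_signal=1.0):
    """Average log10 NEQ over the sampled spatial frequencies."""
    neq = (mean_signal * mtf) ** 2 / nps
    return np.mean(np.log10(neq))

u = np.linspace(0.01, 0.5, 50)        # spatial frequency, cycles/pixel
mtf_sharp = np.exp(-2.0 * u)          # system with better signal transfer
mtf_soft = np.exp(-6.0 * u)           # heavier low-pass system
nps = np.full_like(u, 1e-4)           # flat (white) noise power spectrum
print(log_neq_metric(mtf_sharp, nps) > log_neq_metric(mtf_soft, nps))   # True
```

Substituting SPD-MTFs and SPD-NPSs for the curves above is what makes such a metric scene-and-process-dependent: the inputs, not the formula, carry the scene dependency.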

    A quantitative analysis of a self-emitting thermal IR scene simulation system

    Get PDF
    A quantitative evaluation is performed in which the imaging characteristics of a self-emitting thermal infrared scene simulation system are analyzed. The simulation system is comprised of an energy source (an Argon laser), optics, a spatial light modulator for image generation at visible wavelengths, and a transducer for conversion of the visible wavelength image to a thermal IR image. After construction of the simulation system, the performance of the system and its components is analyzed by measurement of: (1) the Modulation Transfer Function, (2) the temporal response, (3) the maximum thermal contrast, and (4) the Noise Equivalent Delta Temperature. Additionally, an evaluation was made of the performance of the infrared imaging system used to view the simulation system output images, by measurement of its Modulation Transfer Function and Noise Equivalent Delta Temperature. The optimum area of concentration for overall system improvement has been identified for future developmental work.

    Metasurfaces: Beyond Diffractive and Refractive Optics

    Get PDF
    Optical metasurfaces are a category of thin diffractive optical elements, fabricated using standard micro- and nano-fabrication techniques. They provide new ways of controlling the flow of light based on various properties such as polarization, wavelength, and propagation direction. In addition, their compatibility with standard micro-fabrication techniques and compact form factor allows for the development of several novel platforms for the design and implementation of various complicated optical elements and systems. In this thesis, I first give a short overview and a brief history of the works on optical metasurfaces. Then I discuss the capabilities of metasurfaces in controlling the polarization and phase of light, and showcase their potential applications through the cases of polarimetric imaging and vectorial holography. Then, a discussion of the chromatic dispersion in optical metasurfaces is given, followed by three methods that can be utilized to design metasurfaces working at multiple discrete wavelengths. As a potential application of such metasurfaces, I present results of using them as objective lenses in two-photon microscopy. In addition, I discuss how metasurfaces enable the at-will control of chromatic dispersion in diffractive optical elements, demonstrate metasurfaces with controlled dispersion, and provide a discussion of their limitations. Integration of multiple metasurfaces into metasystems allows for implementation of complicated optical functions such as imaging and spectrometry. In this regard, I present several examples of how such metasystems can be designed, fabricated, and utilized to provide wide field of view imaging and projection, microelectromechanically tunable lenses, optical spectrometers, and retroreflectors. I conclude with an outlook on where metasurfaces can be most useful, and what limitations should be overcome before they can find widespread application.