
    Selected Problems in Graph Coloring

    The Borodin–Kostochka Conjecture states that for a graph G, if ∆(G) ≥ 9 and ω(G) ≤ ∆(G) − 1, then χ(G) ≤ ∆(G) − 1. We prove the Borodin–Kostochka Conjecture for (P5, gem)-free graphs, i.e., graphs with no induced P5 and no induced K1 ∨ P4. For a graph G and t, k ∈ Z+, a t-tone k-coloring of G is a function f that assigns to each vertex of G a t-element subset of [k] such that |f(v) ∩ f(w)| < d(v, w) for all distinct v, w ∈ V(G). The t-tone chromatic number of G, denoted τt(G), is the minimum k such that G is t-tone k-colorable. For small values of t, we prove sharp or nearly sharp upper bounds on the t-tone chromatic number of various classes of sparse graphs. In particular, we determine τ2(G) exactly when mad(G) < 12/5 and also determine τ2(G), up to a small additive constant, when G is outerplanar. Finally, we determine τt(Cn) exactly when t ∈ {3, 4, 5}.
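    The t-tone condition above is easy to check mechanically. The following is a minimal sketch, not taken from the thesis, of a verifier for the definition: it assumes a simple undirected graph given as an adjacency list, computes pairwise distances by breadth-first search, and tests |f(v) ∩ f(w)| < d(v, w); all function and variable names are illustrative.

```python
from collections import deque

def distances_from(graph, source):
    """BFS distances from `source` to every reachable vertex."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_t_tone_coloring(graph, labels, t):
    """Check that `labels` (vertex -> set of colours) is a t-tone coloring."""
    if any(len(labels[v]) != t for v in graph):
        return False
    for v in graph:
        dist = distances_from(graph, v)
        for w in graph:
            if w == v or w not in dist:   # condition is vacuous if w is unreachable
                continue
            if len(labels[v] & labels[w]) >= dist[w]:
                return False
    return True

# Example: a 2-tone 5-coloring of the 5-cycle C5, certifying tau_2(C5) <= 5.
C5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
f = {0: {1, 2}, 1: {3, 4}, 2: {1, 5}, 3: {2, 4}, 4: {3, 5}}
print(is_t_tone_coloring(C5, f, t=2))  # True
```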

    Digital signal processing for coherent optical fibre communications

    In this thesis investigations were performed into digital signal processing (DSP) algorithms for coherent optical fibre transmission systems, which provide improved performance with respect to conventional systems and algorithms. Firstly, an overview of coherent detection and coherent transmission systems is given. Experimental investigations were then performed into the performance of digital backpropagation for mitigating fibre nonlinearities in a dual-polarization quadrature phase shift keying (DP-QPSK) system over 7780 km and a dual-polarization 16-level quadrature amplitude modulation (DP-QAM16) system over 1600 km. It is noted that significant improvements in performance may be achieved for a nonlinear step size greater than one span. An approximately exponential relationship was found between the performance improvement in Q-factor and the number of required complex multipliers. DSP algorithms for polarization-switched quadrature phase shift keying (PS-QPSK) are then investigated. A novel two-part equalisation algorithm is proposed which provides singularity-free convergence and blind equalisation of PS-QPSK. This algorithm is characterised and its application to wavelength division multiplexed (WDM) transmission systems is discussed. The thesis concludes with an experimental comparison between a PS-QPSK transmission system and a conventional DP-QPSK system. For a 42.9 Gb/s WDM system, the use of PS-QPSK enabled an increase in reach of more than 30%. The resultant reach of 13,640 km was, at the time of publication, the longest transmission distance reported for 40 Gb/s transmission over an uncompensated link with standard fibre and optical amplification.
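    For context, digital backpropagation compensates fibre impairments by numerically propagating the received field through a virtual fibre with inverted dispersion and nonlinearity, usually via the split-step Fourier method; the nonlinear step size mentioned above determines how many such steps are used per span. The sketch below is illustrative only and is not the thesis implementation: the single-polarization model, the lumped-amplification loss profile, the sign conventions, and all parameter names are assumptions.

```python
import numpy as np

def digital_backpropagation(rx, fs, span_len_km, n_spans, steps_per_span,
                            beta2=-21.7e-27, gamma=1.3e-3, alpha_db_km=0.2):
    """Split-step backpropagation of complex baseband samples `rx` sampled at `fs`."""
    alpha = alpha_db_km * np.log(10) / 10 / 1e3          # power attenuation, 1/m
    dz = span_len_km * 1e3 / steps_per_span              # step size, m
    w = 2 * np.pi * np.fft.fftfreq(rx.size, d=1 / fs)    # angular frequency grid
    # Linear step: undo the chromatic dispersion accumulated over dz
    # (sign chosen to invert the assumed forward-propagation convention).
    D_inv = np.exp(1j * (beta2 / 2) * w**2 * dz)
    field = rx.astype(np.complex128).copy()
    for _ in range(n_spans):
        for k in range(steps_per_span):
            field = np.fft.ifft(np.fft.fft(field) * D_inv)
            # Nonlinear step: remove the Kerr phase rotation, weighted by the
            # effective length of this step under exponential fibre loss.
            z0 = k * dz
            l_eff = (np.exp(-alpha * z0) - np.exp(-alpha * (z0 + dz))) / alpha
            field *= np.exp(-1j * gamma * l_eff * np.abs(field)**2)
    return field
```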

    Realistic camera noise modeling with application to improved HDR synthesis

    Due to the ongoing miniaturization of digital camera sensors and the steady increase of the “number of megapixels”, individual sensor elements of the camera become more sensitive to noise, which deteriorates the final image quality. To work around this problem, sophisticated processing algorithms in the devices can help to maximally exploit the knowledge of the sensor characteristics (e.g., in terms of noise) and offer a better image reconstruction. Although a lot of research focuses on rather simplistic noise models, such as stationary additive white Gaussian noise (AWGN), only limited attention has gone to more realistic digital camera noise models. In this paper, we first present a digital camera noise model that takes several processing steps in the camera into account, such as sensor signal amplification, clipping, post-processing, ... We then apply this noise model to the reconstruction problem of high dynamic range (HDR) images from a small set of low dynamic range (LDR) exposures of a static scene. In the literature, HDR reconstruction is mostly performed by computing a weighted average, in which the weights are directly related to the observed pixel intensities of the LDR images. In this work, we derive a Bayesian probabilistic formulation of a weighting function that is near-optimal in the MSE sense (or, equivalently, the SNR sense) of the reconstructed HDR image, by assuming exponentially distributed irradiance values. We define the weighting function as the probability that the observed pixel intensity is approximately unbiased. The weighting function can be computed directly from the noise model parameters, which gives rise to different symmetric and asymmetric shapes depending on whether electronic noise or photon noise is dominant. We also explain how to deal with the case in which some of the noise model parameters are unknown, and how the camera response function can be estimated using the presented noise model. Finally, experimental results are provided to support our findings.
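    As a point of reference for the reconstruction problem described above, the following is a minimal sketch of the classical weighted-average HDR merge that the paper builds on; it is not the paper's method. The hat-shaped weight, the linear (radiometrically calibrated) input frames, and the known exposure times are simplifying assumptions here, whereas the paper derives the weighting function from its camera noise model instead.

```python
import numpy as np

def hat_weight(z, z_min=0.05, z_max=0.95):
    """Down-weight pixels that are nearly under- or over-exposed (clipped)."""
    w = np.minimum(z - z_min, z_max - z) / ((z_max - z_min) / 2)
    return np.clip(w, 0.0, None)

def merge_hdr(ldr_frames, exposure_times):
    """Merge linear LDR frames (values in [0, 1]) into one irradiance estimate."""
    num = np.zeros_like(ldr_frames[0], dtype=np.float64)
    den = np.zeros_like(num)
    for z, t in zip(ldr_frames, exposure_times):
        w = hat_weight(z)
        num += w * z / t                 # per-frame irradiance estimate
        den += w
    return num / np.maximum(den, 1e-12)  # weighted average across exposures
```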

    Multimodal enhancement-fusion technique for natural images.

    Master's Degree. University of KwaZulu-Natal, Durban. This dissertation presents a multimodal enhancement-fusion (MEF) technique for natural images. The MEF is expected to contribute value to machine vision applications and personal image collections for the human user. Image enhancement techniques and the metrics that are used to assess their performance are prolific, and each is usually optimised for a specific objective. The MEF proposes a framework that adaptively fuses multiple enhancement objectives into a seamless pipeline. Given a segmented input image and a set of enhancement methods, the MEF applies all the enhancers to the image in parallel. The most appropriate enhancement in each image segment is identified, and finally, the differentially enhanced segments are seamlessly fused. To begin with, this dissertation studies targeted contrast enhancement methods and performance metrics that can be utilised in the proposed MEF. It addresses a selection of objective assessment metrics for contrast-enhanced images and determines their relationship with the subjective assessment of human visual systems. This is to identify which objective metrics best approximate human assessment and may therefore be used as an effective replacement for tedious human assessment surveys. A subsequent human visual assessment survey is conducted on the same dataset to ascertain image quality as perceived by a human observer. The interrelated concepts of naturalness and detail were found to be key motivators of human visual assessment. Findings show that when assessing the quality or accuracy of these methods, no single quantitative metric correlates well with human perception of naturalness and detail; however, a combination of two or more metrics may be used to approximate the complex human visual response. Thereafter, this dissertation proposes the multimodal enhancer that adaptively selects the optimal enhancer for each image segment. The MEF focusses on improving chromatic irregularities such as poor contrast distribution. It deploys a concurrent enhancement pathway that subjects an image to multiple image enhancers in parallel, followed by a fusion algorithm that creates a composite image combining the strengths of each enhancement path. The study develops a framework for parallel image enhancement, followed by parallel image assessment and selection, leading to a final merging of selected regions from the enhanced set. The output combines desirable attributes from each enhancement pathway to produce a result that is superior to each path taken alone. The study showed that the proposed MEF technique performs well for most image types. The MEF is favoured by a human panel in subjective assessments and achieves better performance on objective image quality assessment than other enhancement methods.
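    The enhance-assess-fuse pipeline described above can be summarised in a few lines. The sketch below is illustrative only and is not the dissertation's implementation: the enhancer list, the quality metric, and the hard (unblended) per-segment selection are assumptions, whereas the dissertation additionally fuses the selected segments seamlessly.

```python
import numpy as np

def multimodal_enhance(image, segments, enhancers, quality_metric):
    """image: float array (H, W) or (H, W, C); segments: integer label map (H, W);
    enhancers: callables image -> enhanced image;
    quality_metric: pixel sample -> score, higher is better."""
    candidates = [enhance(image) for enhance in enhancers]   # parallel enhancement paths
    output = np.zeros_like(image)
    for label in np.unique(segments):
        mask = segments == label
        # Assess every candidate on this segment only and keep the best one.
        scores = [quality_metric(cand[mask]) for cand in candidates]
        best = candidates[int(np.argmax(scores))]
        output[mask] = best[mask]
    return output

# Usage with two stand-in enhancers and local contrast (std. dev.) as the metric:
# enhancers = [lambda im: im, lambda im: np.clip(1.2 * im - 0.1, 0.0, 1.0)]
# result = multimodal_enhance(img, labels, enhancers, quality_metric=np.std)
```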

    Visibility recovery on images acquired in attenuating media. Application to underwater, fog, and mammographic imaging

    136 p. When acquired in attenuating media, digital images often suffer from a particularly complex degradation that reduces their visual quality, hindering their suitability for further computational applications, or simply decreasing the visual pleasantness for the user. In these cases, mathematical image processing reveals itself as an ideal tool to recover some of the information lost during the degradation process. In this dissertation, we deal with three such practical scenarios in which this problem is especially relevant, namely, underwater image enhancement, fog removal and mammographic image processing. In the case of digital mammograms, X-ray beams traverse human tissue, and electronic detectors capture them as they reach the other side. However, the superposition on a bidimensional image of three-dimensional structures produces low-contrast images in which structures of interest suffer from a diminished visibility, obstructing diagnosis tasks. Regarding fog removal, the loss of contrast is produced by the atmospheric conditions, and white colour takes over the scene uniformly as distance increases, also reducing visibility. For underwater images, there is an added difficulty, since colour is not lost uniformly; instead, red colours decay the fastest, and green and blue colours typically dominate the acquired images. To address all these challenges, in this dissertation we develop new methodologies that rely on: a) physical models of the observed degradation, and b) the calculus of variations. Equipped with this powerful machinery, we design novel theoretical and computational tools, including image-dependent functional energies that capture the particularities of each degradation model. These energies are composed of different integral terms that are simultaneously minimized by means of efficient numerical schemes, producing a clean, visually pleasant and useful output image, with better contrast and increased visibility. In every considered application, we provide comprehensive qualitative (visual) and quantitative experimental results to validate our methods, confirming that the developed techniques outperform other existing approaches in the literature.
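    To make the "physical models of the observed degradation" concrete for the fog case, the sketch below shows the standard single-scattering haze model, I(x) = J(x)·t(x) + A·(1 − t(x)), and its direct inversion once the transmission t and the airlight A are known. It is a simplified illustration, not the dissertation's variational method, which instead restores the scene by minimizing image-dependent energy functionals.

```python
import numpy as np

def foggy_image(clear, transmission, airlight):
    """Forward model: attenuate the scene radiance and add scattered airlight."""
    return clear * transmission + airlight * (1.0 - transmission)

def recover_visibility(observed, transmission, airlight, t_min=0.1):
    """Invert the model; clamp the transmission so that dense-fog regions
    do not amplify noise excessively."""
    t = np.maximum(transmission, t_min)
    return (observed - airlight * (1.0 - t)) / t
```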