
    Learning of Image Dehazing Models for Segmentation Tasks

    Full text link
    To evaluate their performance, existing dehazing approaches generally rely on distance measures between the generated image and its corresponding ground truth. Although they can produce visually good images, pixel-based or even perceptual metrics do not guarantee, in general, that the produced image is fit to be used as input for low-level computer vision tasks such as segmentation. To overcome this weakness, we propose a novel end-to-end approach for image dehazing whose output is fit to be used as input to an image segmentation procedure, while maintaining the visual quality of the generated images. Inspired by the success of Generative Adversarial Networks (GAN), we propose to optimize the generator by introducing a discriminator network and a loss function that evaluates the segmentation quality of dehazed images. In addition, we make use of a supplementary loss function that ensures that the visual and perceptual quality of the generated image is preserved in hazy conditions. Results obtained with the proposed technique are appealing, comparing favorably to state-of-the-art approaches when considering the performance of segmentation algorithms on the hazy images. Comment: Accepted in EUSIPCO 201
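    As a concrete illustration of such an objective, here is a minimal sketch, not the authors' implementation, of a combined dehazing loss that mixes an adversarial term, a segmentation-quality term, and a visual-fidelity term; all function names and weights are assumptions, and a plain L1 pixel loss stands in for the perceptual loss.

```python
# Minimal sketch (assumptions throughout) of a combined dehazing objective:
# adversarial term + segmentation-quality term + visual-fidelity term.
import torch
import torch.nn.functional as F

def dehazing_loss(dehazed, clear_gt, seg_logits, seg_gt,
                  disc_fake_score, lambda_seg=1.0, lambda_vis=10.0):
    # Adversarial term: push the discriminator to rate dehazed images as real.
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_score, torch.ones_like(disc_fake_score))
    # Segmentation term: the dehazed output should remain usable as
    # segmentation input (cross-entropy against the segmentation labels).
    seg = F.cross_entropy(seg_logits, seg_gt)
    # Visual term: stay close to the haze-free ground truth
    # (an L1 pixel loss stands in here for a perceptual loss).
    vis = F.l1_loss(dehazed, clear_gt)
    return adv + lambda_seg * seg + lambda_vis * vis
```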

    Time–Frequency Cepstral Features and Heteroscedastic Linear Discriminant Analysis for Language Recognition

    Get PDF
    The shifted delta cepstrum (SDC) is a widely used feature extraction method for language recognition (LRE). With a high context width due to its incorporation of multiple frames, SDC outperforms traditional delta and acceleration feature vectors. However, it also introduces correlation into the concatenated feature vector, which increases redundancy and may degrade the performance of backend classifiers. In this paper, we first propose a time-frequency cepstral (TFC) feature vector, which is obtained by performing a temporal discrete cosine transform (DCT) on the cepstrum matrix and selecting the transformed elements in a zigzag scan order. Beyond this, we increase discriminability by applying heteroscedastic linear discriminant analysis (HLDA) to the full cepstrum matrix. By using block diagonal matrix constraints, the large HLDA problem is reduced to several smaller HLDA problems, yielding a block diagonal HLDA (BDHLDA) algorithm with much lower computational complexity. The BDHLDA method is finally extended to the GMM domain, using the simpler TFC features during re-estimation to provide significantly improved computation speed. Experiments on the NIST 2003 and 2007 LRE evaluation corpora show that TFC is more effective than SDC, and that the GMM-based BDHLDA yields lower equal error rate (EER) and minimum average cost (Cavg) than either the TFC or SDC approach.
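    A minimal sketch of the TFC extraction step described above, assuming a block of consecutive cepstral frames, a JPEG-style zig-zag ordering, and an illustrative number of retained coefficients; it is not the paper's implementation.

```python
# Sketch (assumptions throughout) of TFC feature extraction: temporal DCT
# over a block of cepstral frames, then zig-zag selection of coefficients.
import numpy as np
from scipy.fftpack import dct

def tfc_features(cepstrum_block, n_keep=56):
    """cepstrum_block: (n_frames, n_cepstra) matrix of consecutive frames."""
    # DCT along the time axis decorrelates the temporal trajectory of
    # each cepstral coefficient.
    C = dct(cepstrum_block, type=2, norm='ortho', axis=0)
    # Zig-zag scan (as in JPEG) orders coefficients from low to high
    # combined time/cepstral index; keep the first n_keep of them.
    rows, cols = C.shape
    order = sorted(((r, c) for r in range(rows) for c in range(cols)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))
    return np.array([C[r, c] for r, c in order[:n_keep]])

# Example: a block of 10 frames of 13 cepstra -> 56-dimensional TFC vector.
feat = tfc_features(np.random.randn(10, 13))
```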

    Latent Class Model with Application to Speaker Diarization

    Get PDF
    In this paper, we apply a latent class model (LCM) to the task of speaker diarization. LCM is similar to Patrick Kenny's variational Bayes (VB) method in that it uses soft information and avoids premature hard decisions in its iterations. In contrast to the VB method, which is based on a generative model, LCM provides a framework allowing both generative and discriminative models. The discriminative property is realized in this work through the use of i-vectors (Ivec), probabilistic linear discriminant analysis (PLDA), and a support vector machine (SVM). Systems denoted LCM-Ivec-PLDA, LCM-Ivec-SVM, and LCM-Ivec-Hybrid are introduced. In addition, three further improvements are applied to enhance performance: 1) adding neighboring windows to extract more speaker information for each short segment; 2) using a hidden Markov model to avoid frequent speaker change points; and 3) using agglomerative hierarchical clustering for initialization and to provide hard and soft priors, in order to overcome sensitivity to initialization. Experiments on the National Institute of Standards and Technology Rich Transcription 2009 speaker diarization database, under the single distant microphone condition, show that the proposed methods achieve substantial relative improvements in diarization error rate (DER) over mainstream systems. Compared to the VB method, the relative improvements of the LCM-Ivec-PLDA, LCM-Ivec-SVM, and LCM-Ivec-Hybrid systems are 23.5%, 27.1%, and 43.0%, respectively. Experiments on our collected database and on the CALLHOME97, CALLHOME00, and SRE08 short2-summed trial conditions also show that the proposed LCM-Ivec-Hybrid system has the best overall performance.
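    The sketch below illustrates the soft-assignment idea only: segment-to-speaker posteriors are kept soft across iterations instead of being collapsed into hard decisions. Cosine similarity between segment i-vectors stands in for the PLDA/SVM scoring used by the actual systems, and all names and constants are assumptions.

```python
# Sketch (not the paper's system) of soft segment-to-speaker assignment.
import numpy as np

def lcm_soft_diarization(ivecs, n_speakers=2, n_iter=10, temp=5.0):
    """ivecs: (n_segments, dim) array of length-normalized segment i-vectors."""
    n_seg = len(ivecs)
    rng = np.random.default_rng(0)
    # Soft priors: start from a random (or AHC-derived) responsibility matrix.
    post = rng.dirichlet(np.ones(n_speakers), size=n_seg)
    for _ in range(n_iter):
        # Speaker models as posterior-weighted mean i-vectors.
        models = post.T @ ivecs
        models /= np.linalg.norm(models, axis=1, keepdims=True)
        # Similarity scores -> soft posteriors; no hard assignment is made.
        scores = ivecs @ models.T
        post = np.exp(temp * scores)
        post /= post.sum(axis=1, keepdims=True)
    return post  # (n_segments, n_speakers) soft speaker posteriors
```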

    Improving the redistribution of the security lessons in healthcare: An evaluation of the Generic Security Template

    Get PDF
    Context. The recurrence of past security breaches in healthcare showed that lessons had not been effectively learned across different healthcare organisations. Recent studies have identified the need to improve learning from incidents and to share security knowledge to prevent future attacks. Generic Security Templates (GSTs) have been proposed to facilitate this knowledge transfer. The objective of this paper is to evaluate whether potential users in healthcare organisations can exploit the GST technique to share lessons learned from security incidents. Methodology. We conducted a series of case studies to evaluate GSTs. In particular, we used a GST for a security incident in the US Veterans’ Affairs Administration to explore whether security lessons could be applied in a very different Chinese healthcare organisation. Results. The results showed that Chinese security professionals accepted the use of GSTs and that cyber security lessons could be transferred to a Chinese healthcare organisation using this approach. The users also identified the weaknesses and strengths of GSTs, providing suggestions for future improvements. Conclusion. Generic Security Templates can be used to redistribute lessons learned from security incidents. Sharing cyber security lessons helps organisations consider their own practices and assess whether applicable security standards address concerns raised in previous breaches in other countries. The experience gained from this study provides the basis for future work conducting similar studies in other healthcare organisations.

    Floquet Chern Insulators of Light

    Full text link
    Achieving topologically protected robust transport in optical systems has recently been of great interest. Most topological photonic structures can be understood by solving the eigenvalue problem of Maxwell's equations for a static linear system. Here, we extend topological phases into dynamically driven nonlinear systems and achieve a Floquet Chern insulator of light in nonlinear photonic crystals (PhCs). Specifically, we start by presenting the Floquet eigenvalue problem in driven two-dimensional PhCs and show that it is necessarily non-Hermitian. We then define topological invariants associated with Floquet bands using non-Hermitian topological band theory, and show that topological band gaps with non-zero Chern number can be opened by breaking time-reversal symmetry through the driving field. Furthermore, we show that topological phase transitions between Floquet Chern insulators and normal insulators occur at synthetic Weyl points in a three-dimensional parameter space consisting of two momenta and the driving frequency. Finally, we numerically demonstrate the existence of chiral edge states at the interfaces between a Floquet Chern insulator and normal insulators, where the transport is non-reciprocal and unidirectional. Our work paves the way to further exploring topological phases in driven nonlinear optical systems and their optoelectronic applications, and our method of inducing Floquet topological phases is also applicable to other wave systems such as phonons, excitons, and polaritons.
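    For context, the sketch below gives the standard (Hermitian) form of the Floquet eigenvalue problem and of the Chern number attached to a Floquet band. It uses textbook notation consistent with the abstract rather than the paper's exact non-Hermitian formulation; the symbols H_k(t), phi, epsilon, T, and Omega belong to that generic setting.

```latex
% Floquet ansatz for a system driven with period T = 2\pi/\Omega:
% states factor into a quasi-energy phase and a time-periodic part.
\begin{align}
  \psi_{n\mathbf{k}}(t) &= e^{-i\varepsilon_{n\mathbf{k}} t}\,\phi_{n\mathbf{k}}(t),
  \qquad \phi_{n\mathbf{k}}(t+T) = \phi_{n\mathbf{k}}(t), \\
  \bigl[\, H_{\mathbf{k}}(t) - i\,\partial_t \,\bigr]\,\phi_{n\mathbf{k}}(t)
  &= \varepsilon_{n\mathbf{k}}\,\phi_{n\mathbf{k}}(t),
  \qquad \varepsilon_{n\mathbf{k}} \ \text{defined mod}\ \Omega, \\
  % Chern number of the n-th Floquet band over the 2D Brillouin zone; the
  % inner product is understood to include a time average over one period.
  C_n &= \frac{1}{2\pi} \int_{\mathrm{BZ}} d^2k \;
  i\,\nabla_{\mathbf{k}} \times
  \bigl\langle\!\bigl\langle \phi_{n\mathbf{k}} \bigm| \nabla_{\mathbf{k}}\,\phi_{n\mathbf{k}} \bigr\rangle\!\bigr\rangle .
\end{align}
```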