
    DeepWSD: Projecting Degradations in Perceptual Space to Wasserstein Distance in Deep Feature Space

    Existing deep learning-based full-reference IQA (FR-IQA) models usually predict image quality in a deterministic way by explicitly comparing features, gauging how severely distorted an image is by how far the corresponding feature lies from the space of the reference images. Herein, we look at this problem from a different viewpoint and propose to model the quality degradation in perceptual space from a statistical distribution perspective. As such, the quality is measured based upon the Wasserstein distance in the deep feature domain. More specifically, the 1D Wasserstein distance at each stage of the pre-trained VGG network is measured, based on which the final quality score is computed. The deep Wasserstein distance (DeepWSD) performed on features from neural networks enjoys better interpretability of the quality contamination caused by various types of distortions and presents an advanced quality prediction capability. Extensive experiments and theoretical analysis show the superiority of the proposed DeepWSD in terms of both quality prediction and optimization. Comment: Accepted by ACM Multimedia 2022
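    The core measurement can be sketched as follows: treat each stage's feature map as samples from a 1D empirical distribution and compare reference against distorted features with the Wasserstein distance. This is an illustrative stand-in using random arrays in place of real VGG activations, and simple averaging in place of the paper's exact pooling; `stagewise_wasserstein` is a hypothetical helper, not the authors' code.

    ```python
    import numpy as np
    from scipy.stats import wasserstein_distance

    def stagewise_wasserstein(ref_feats, dist_feats):
        """Average 1D Wasserstein distance across network stages.

        ref_feats / dist_feats: lists of feature maps (one per stage);
        each map's values are treated as samples of a 1D empirical
        distribution, mirroring the stage-wise comparison described
        in the abstract.
        """
        scores = [wasserstein_distance(r.ravel(), d.ravel())
                  for r, d in zip(ref_feats, dist_feats)]
        return float(np.mean(scores))

    rng = np.random.default_rng(0)
    # Toy "features" from three stages; the distorted image's features
    # are modeled as the reference plus noise.
    ref = [rng.standard_normal((8, 8)) for _ in range(3)]
    dist = [f + 0.5 * rng.standard_normal(f.shape) for f in ref]

    print(stagewise_wasserstein(ref, ref))   # identical features -> 0.0
    print(stagewise_wasserstein(ref, dist))  # distortion -> positive score
    ```

    A higher score indicates a larger distributional shift in feature space, i.e. a more severe perceptual degradation.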

    Study on River Migration and Stable Water Supply Countermeasure in the Reach of Kaoping Weir

    Source: ICHE Conference Archive - https://mdi-de.baw.de/icheArchiv

    Improved on an improved remote user authentication scheme with key agreement

    Recently, Kumari et al. pointed out that Chang et al.'s scheme "Untraceable dynamic-identity-based remote user authentication scheme with verifiable password update" not only has several drawbacks, but also does not provide any session key agreement. Hence, they proposed an improved remote user authentication scheme with key agreement based on Chang et al.'s scheme. After cryptanalysis, they confirmed the security properties of the improved scheme. However, we determine that the scheme suffers from both an anonymity breach and the smart-card-loss password-guessing attack, which are among the ten basic requirements for secure smart-card-based identity authentication proposed by Liao et al. Therefore, we modify the method to include the desired security functionality, which is significantly important in a smart-card-based user authentication system.

    The role of tool geometry in process damped milling

    The complex interaction between machining structural systems and the cutting process results in machining instability, so-called chatter. In some milling scenarios, process damping is a useful phenomenon that can be exploited to mitigate chatter and hence improve productivity. In the present study, experiments are performed to evaluate the performance of process damped milling considering different tool geometries (edge radius, rake and relief angles, and variable helix/pitch). The results clearly indicate that variable helix/pitch angles most significantly increase process damping performance. Additionally, increased cutting edge radius moderately improves process damping performance, while rake and relief angles have a smaller and closely coupled effect.

    MLPerf Inference Benchmark

    Machine-learning (ML) hardware and software system demand is burgeoning. Driven by ML applications, the number of different ML inference systems has exploded. Over 100 organizations are building ML inference chips, and the systems that incorporate existing models span at least three orders of magnitude in power consumption and five orders of magnitude in performance; they range from embedded devices to data-center solutions. Fueling the hardware are a dozen or more software frameworks and libraries. The myriad combinations of ML hardware and ML software make assessing ML-system performance in an architecture-neutral, representative, and reproducible manner challenging. There is a clear need for industry-wide standard ML benchmarking and evaluation criteria. MLPerf Inference answers that call. In this paper, we present our benchmarking method for evaluating ML inference systems. Driven by more than 30 organizations as well as more than 200 ML engineers and practitioners, MLPerf prescribes a set of rules and best practices to ensure comparability across systems with wildly differing architectures. The first call for submissions garnered more than 600 reproducible inference-performance measurements from 14 organizations, representing over 30 systems that showcase a wide range of capabilities. The submissions attest to the benchmark's flexibility and adaptability. Comment: ISCA 2020
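    The kind of reproducible latency measurement such a benchmark standardizes can be sketched minimally: warm the system up, time each query, and report percentile latencies. This is a toy harness under stated assumptions; `benchmark`, the `infer` callable, and the inputs are all placeholders, and MLPerf itself defines far stricter scenarios, query generators, and run rules.

    ```python
    import statistics
    import time

    def benchmark(infer, inputs, warmup=2):
        """Measure per-query latency of an inference callable.

        Runs a few warmup queries (to settle caches/JIT), then times
        each query with a high-resolution clock and reports median
        and 90th-percentile latency in milliseconds.
        """
        for x in inputs[:warmup]:
            infer(x)
        latencies = []
        for x in inputs:
            t0 = time.perf_counter()
            infer(x)
            latencies.append((time.perf_counter() - t0) * 1e3)
        latencies.sort()
        return {"p50_ms": statistics.median(latencies),
                "p90_ms": latencies[int(0.9 * (len(latencies) - 1))]}

    # Toy model: summing a list stands in for a real network's forward pass.
    result = benchmark(sum, [[1.0] * 1000 for _ in range(50)])
    print(result)
    ```

    Reporting tail latency (p90 here; p99 in serious benchmarks) rather than only the mean is what makes results comparable across systems with very different performance variability.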