
    What impacts the helpfulness of online multidimensional reviews? A perspective from cross-attribute rating and ranking inconsistency

    This paper investigates the effects of information inconsistency, particularly ranking inconsistency, on review helpfulness in a multidimensional rating system, drawing on information diagnosticity and attribution theory. The key findings are: (a) Product cross-attribute dispersion has a significant negative impact on review helpfulness, while the overall attribute ranking consistency and the ranking consistency of the product's best prominent attribute positively impact review helpfulness. (b) Product cross-attribute dispersion negatively impacts review helpfulness for non-luxury products but positively impacts it for luxury products, while the cross-attribute rating difference of a single review positively impacts its helpfulness only if the product is non-luxury. (c) The overall attribute ranking consistency significantly impacts review helpfulness only for luxury products, whereas the ranking consistency of the product's best and worst prominent attributes impacts review helpfulness only for non-luxury products.
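    To make the constructs concrete, here is a minimal sketch (not the paper's exact operationalization) of how a review's cross-attribute dispersion and its ranking consistency with a product's aggregate attribute profile might be computed; the attribute values, the use of standard deviation, and the Spearman correlation are illustrative assumptions.

```python
# Illustrative sketch (not the paper's measures): cross-attribute dispersion of
# one review and its ranking consistency with the product-level attribute ranking.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical attribute ratings (e.g., rooms, service, location, value).
review_ratings = np.array([5.0, 2.0, 4.0, 3.0])   # one multidimensional review
product_ratings = np.array([4.5, 4.0, 4.2, 3.8])  # product-level attribute averages

# Cross-attribute dispersion: spread of the review's ratings across attributes.
dispersion = review_ratings.std(ddof=0)

# Ranking consistency: rank correlation between the review's attribute ordering
# and the product's aggregate attribute ordering.
consistency, _ = spearmanr(review_ratings, product_ratings)

print(f"dispersion={dispersion:.3f}, ranking consistency={consistency:.3f}")
```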

    A Survey of Geometric Optimization for Deep Learning: From Euclidean Space to Riemannian Manifold

    Although Deep Learning (DL) has achieved success in complex Artificial Intelligence (AI) tasks, it suffers from various notorious problems (e.g., feature redundancy and vanishing or exploding gradients), since updating parameters in Euclidean space cannot fully exploit the geometric structure of the solution space. As a promising alternative, Riemannian-based DL uses geometric optimization to update parameters on Riemannian manifolds and can leverage the underlying geometric information. Accordingly, this article presents a comprehensive survey of applying geometric optimization in DL. It first introduces the basic procedure of geometric optimization, including various geometric optimizers and some concepts of Riemannian manifolds. It then investigates the application of geometric optimization in different DL networks for various AI tasks, e.g., convolutional neural networks, recurrent neural networks, transfer learning, and optimal transport. Additionally, typical public toolboxes that implement optimization on manifolds are also discussed. Finally, the article compares the performance of different deep geometric optimization methods in image recognition scenarios. Comment: 41 pages.
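    As a concrete illustration of the kind of geometric optimization the survey covers, below is a minimal sketch of one Riemannian gradient-descent step on the unit sphere (project the Euclidean gradient onto the tangent space, step, then retract); the toy objective and step size are assumptions, not taken from the survey.

```python
# Illustrative sketch: Riemannian gradient descent on the unit sphere S^{n-1}.
import numpy as np

def riemannian_sgd_step(x, euclid_grad, lr=0.1):
    """One Riemannian SGD step: project, step in the tangent space, retract."""
    riem_grad = euclid_grad - np.dot(euclid_grad, x) * x   # tangent-space projection
    y = x - lr * riem_grad                                  # gradient step
    return y / np.linalg.norm(y)                            # retraction onto the sphere

# Toy objective: f(x) = x^T A x on the sphere, minimized by an eigenvector
# of the smallest eigenvalue of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A + A.T
x = rng.standard_normal(5)
x /= np.linalg.norm(x)
for _ in range(200):
    x = riemannian_sgd_step(x, 2 * A @ x, lr=0.05)          # grad of x^T A x is 2Ax
print("f(x) =", x @ A @ x)  # approaches the smallest eigenvalue of A
```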

    WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks

    Due to the popularity of Artificial Intelligence (AI) technology, numerous backdoor attacks have been designed by adversaries to mislead deep neural network predictions by manipulating training samples and training processes. Although backdoor attacks are effective in various real scenarios, they still suffer from both low fidelity of poisoned samples and non-negligible transfer in latent space, which make them easily detectable by existing backdoor detection algorithms. To overcome these weaknesses, this paper proposes a novel frequency-based backdoor attack method named WaveAttack, which obtains high-frequency image features through the Discrete Wavelet Transform (DWT) to generate backdoor triggers. Furthermore, we introduce an asymmetric frequency obfuscation method, which adds an adaptive residual in the training and inference stages to improve the impact of triggers and further enhance the effectiveness of WaveAttack. Comprehensive experimental results show that WaveAttack not only achieves higher stealthiness and effectiveness, but also outperforms state-of-the-art (SOTA) backdoor attack methods in image fidelity, with up to a 28.27% improvement in PSNR, a 1.61% improvement in SSIM, and a 70.59% reduction in IS.
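    A minimal sketch of the frequency-domain idea, assuming a grayscale image and the PyWavelets package: decompose with a 2-D DWT, perturb only the high-frequency sub-bands, and reconstruct. The fixed random pattern and blend strength below stand in for WaveAttack's learned adaptive residual and are not the paper's method.

```python
# Illustrative sketch of a high-frequency, DWT-domain trigger.
import numpy as np
import pywt

def embed_hf_trigger(image, strength=0.05, wavelet="haar"):
    """Add a small perturbation to the high-frequency DWT sub-bands only."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    rng = np.random.default_rng(42)            # fixed pattern acts as the trigger
    trigger = rng.standard_normal(cD.shape)
    cH = cH + strength * trigger
    cV = cV + strength * trigger
    cD = cD + strength * trigger
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

image = np.random.rand(32, 32)                 # stand-in for a grayscale sample
poisoned = embed_hf_trigger(image)
print("max pixel change:", np.abs(poisoned - image).max())
```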

    DSAM-GN: Graph Network based on Dynamic Similarity Adjacency Matrices for Vehicle Re-identification

    In recent years, vehicle re-identification (Re-ID) has gained increasing importance in applications such as assisted driving systems, traffic flow management, and vehicle tracking, due to the growth of intelligent transportation systems. However, the presence of extraneous background information and occlusions can interfere with the learning of discriminative features, leading to significant variations in images of the same vehicle across different scenarios. This paper proposes a method, named graph network based on dynamic similarity adjacency matrices (DSAM-GN), which incorporates a novel approach for constructing adjacency matrices to capture spatial relationships of local features and reduce background noise. Specifically, the proposed method divides the extracted vehicle features into different patches as nodes within the graph network. A spatial attention-based similarity adjacency matrix generation (SASAMG) module is employed to compute similarity matrices of nodes, and a dynamic erasure operation is applied to disconnect nodes with low similarity, resulting in similarity adjacency matrices. Finally, the nodes and similarity adjacency matrices are fed into graph networks to extract more discriminative features for vehicle Re-ID. Experimental results on the public datasets VeRi-776 and VehicleID demonstrate the effectiveness of the proposed method compared with recent works. Comment: This paper has been accepted by the 20th Pacific Rim International Conference on Artificial Intelligence in 202
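    A minimal sketch of the adjacency-matrix idea, assuming patch features are plain feature vectors: compute pairwise cosine similarity and erase (zero out) low-similarity edges before feeding the graph to a GNN. The quantile threshold is an illustrative stand-in for the SASAMG module and the paper's dynamic erasure operation.

```python
# Illustrative sketch: similarity adjacency matrix with erasure of weak edges.
import numpy as np

def similarity_adjacency(patch_features, keep_ratio=0.5):
    """patch_features: (num_patches, dim) array of patch feature vectors."""
    f = patch_features / np.linalg.norm(patch_features, axis=1, keepdims=True)
    sim = f @ f.T                                   # cosine similarity between patches
    thresh = np.quantile(sim, 1.0 - keep_ratio)     # data-dependent ("dynamic") cutoff
    adj = np.where(sim >= thresh, sim, 0.0)         # erase (disconnect) weak edges
    np.fill_diagonal(adj, 1.0)                      # keep self-loops
    return adj

patches = np.random.rand(8, 64)                     # 8 patches, 64-dim features
A = similarity_adjacency(patches)
print(A.shape, int((A > 0).sum()), "edges kept")
```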

    GitFL: Adaptive Asynchronous Federated Learning using Version Control

    As a promising distributed machine learning paradigm that enables collaborative training without compromising data privacy, Federated Learning (FL) has been increasingly used in AIoT (Artificial Intelligence of Things) design. However, due to the lack of efficient management of straggling devices, existing FL methods greatly suffer from low inference accuracy and long training times. Things become even worse when various uncertain factors existing in AIoT scenarios (e.g., network delays, performance variances caused by process variation) are taken into account. To address this issue, this paper proposes a novel asynchronous FL framework named GitFL, whose implementation is inspired by the famous version control system Git. Unlike traditional FL, the cloud server of GitFL maintains a master model (i.e., the global model) together with a set of branch models indicating the trained local models committed by selected devices, where the master model is updated based on all the pushed branch models and their version information, and only the branch models after the pull operation are dispatched to devices. By using our proposed Reinforcement Learning (RL)-based device selection mechanism, a pulled branch model with an older version is more likely to be dispatched to a faster and less frequently selected device for the next round of local training. In this way, GitFL enables both effective control of model staleness and adaptive load balancing of versioned models among straggling devices, thus avoiding performance deterioration. Comprehensive experimental results on well-known models and datasets show that, compared with state-of-the-art asynchronous FL methods, GitFL can achieve up to 2.64X training acceleration and 7.88% inference accuracy improvement in various uncertain scenarios.
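    A minimal sketch of the Git-style bookkeeping described above, with a master model, pushed branch models, and version counters; the staleness-based weighting and the pull behavior are illustrative assumptions rather than GitFL's exact update and RL-based selection rules.

```python
# Illustrative sketch of a Git-style FL server: branch models carry version
# counters, and the master model is a version-aware weighted average.
import numpy as np

class GitFLServer:
    def __init__(self, master):
        self.master = master                  # global ("master") model weights
        self.branches = {}                    # device_id -> (weights, version)

    def push(self, device_id, weights, version):
        self.branches[device_id] = (np.asarray(weights, dtype=float), version)

    def merge(self):
        """Update the master model; fresher branch versions get larger weight."""
        latest = max(v for _, v in self.branches.values())
        coeffs = np.array([1.0 / (1 + latest - v) for _, v in self.branches.values()])
        coeffs /= coeffs.sum()
        stacked = np.stack([w for w, _ in self.branches.values()])
        self.master = (coeffs[:, None] * stacked).sum(axis=0)
        return self.master

    def pull(self, device_id):
        """Dispatch a branch model (here simply a copy of the master) for local training."""
        return self.master.copy()

server = GitFLServer(master=np.zeros(4))
server.push("dev0", [0.2, 0.1, 0.0, 0.3], version=3)
server.push("dev1", [0.4, 0.0, 0.1, 0.1], version=1)   # staler, weighted less
print(server.merge())
```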

    Protect Federated Learning Against Backdoor Attacks via Data-Free Trigger Generation

    As a distributed machine learning paradigm, Federated Learning (FL) enables large-scale clients to collaboratively train a model without sharing their raw data. However, due to the lack of data auditing for untrusted clients, FL is vulnerable to poisoning attacks, especially backdoor attacks. By using poisoned data for local training or by directly changing the model parameters, attackers can easily inject backdoors into the model, causing it to misclassify images containing targeted patterns. To address these issues, we propose a novel data-free, trigger-generation-based defense approach built on two characteristics of backdoor attacks: i) triggers are learned faster than normal knowledge, and ii) trigger patterns have a greater effect on image classification than normal class patterns. Our approach generates images containing newly learned knowledge by identifying the differences between the old and new global models, and filters trigger images by evaluating the effect of these generated images. By using these trigger images, our approach eliminates poisoned models to ensure the updated global model is benign. Comprehensive experiments demonstrate that our approach can defend against almost all existing types of backdoor attacks and outperforms all seven state-of-the-art defense methods in both IID and non-IID scenarios. In particular, our approach can successfully defend against the backdoor attack even when 80% of the clients are malicious.
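    A minimal sketch of the data-free trigger-generation step under stated assumptions: starting from random noise, inputs are optimized so that the new global model diverges from the old one, on the intuition that newly learned (possibly backdoored) knowledge appears first in such differences. The models, loss, and hyperparameters are placeholders, not the paper's exact procedure.

```python
# Illustrative sketch: synthesize candidate trigger images without any real data
# by maximizing the divergence between the old and new global models.
import torch
import torch.nn.functional as F

def generate_candidate_triggers(old_model, new_model, shape=(8, 3, 32, 32),
                                steps=100, lr=0.1):
    old_model.eval()
    new_model.eval()
    x = torch.rand(shape, requires_grad=True)          # start from random noise
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        log_p_old = F.log_softmax(old_model(x), dim=1)
        p_new = F.softmax(new_model(x), dim=1)
        # Maximize KL(new || old): push x toward knowledge only the new model has.
        loss = -F.kl_div(log_p_old, p_new, reduction="batchmean")
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)                        # keep a valid image range
    return x.detach()                                   # filtering by effect comes next
```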

    Modeling and Verifying Uncertainty-Aware Timing Behaviors using Parametric Logical Time Constraint

    The Clock Constraint Specification Language (CCSL) is a logical-time-based modeling language for formalizing the timing behaviors of real-time and embedded systems. However, it cannot capture timing behaviors that contain uncertainties, e.g., uncertainty in execution time and period. This limits the application of the language to real-world systems, as uncertainty often exists in practice due to both internal and external factors. To capture uncertainties in timing behaviors, in this paper we extend CCSL by introducing parameters into constraints. We then propose an approach to transform parametric CCSL constraints into SMT formulas for efficient verification. We apply our approach to an industrial case, proposed as the FMTV (Formal Methods for Timing Verification) Challenge in 2015, which shows that timing behaviors with uncertainties can be effectively modeled and verified using parametric CCSL.
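    A minimal sketch of the SMT-encoding idea using Z3's Python API: two periodic logical clocks are unrolled for a bounded number of steps, the period of one clock is left as an uncertain parameter, and the solver searches for a parameter value that violates a precedence relation. The clocks, bound, and constraint shapes are illustrative assumptions, not the paper's encoding of CCSL.

```python
# Illustrative sketch: bounded unrolling of two periodic clocks with a
# parametric period, checked with Z3.
from z3 import Int, Solver, And, Or, sat

N = 20                                    # bounded number of base-clock steps
p = Int("p")                              # uncertain period of clock a (parameter)

s = Solver()
s.add(And(p >= 2, p <= 4))                # uncertainty interval for the parameter

# Tick counts after step i: clock a ticks every p steps, clock b every 3 steps.
count_a = [Int(f"a_{i}") for i in range(N + 1)]
count_b = [Int(f"b_{i}") for i in range(N + 1)]
s.add(count_a[0] == 0, count_b[0] == 0)
for i in range(1, N + 1):
    s.add(count_a[i] == i / p)            # integer division models periodic ticking
    s.add(count_b[i] == i // 3)

# Violation of "a precedes b": b's tick count exceeds a's count at some step.
s.add(Or([count_b[i] > count_a[i] for i in range(1, N + 1)]))

if s.check() == sat:
    print("precedence can be violated, e.g. with p =", s.model()[p])
else:
    print("precedence holds for all p in [2, 4] up to step", N)
```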

    Enumeration and Deduction Driven Co-Synthesis of CCSL Specifications Using Reinforcement Learning

    The Clock Constraint Specification Language (CCSL) has become popular for modeling and analyzing the timing behaviors of real-time embedded systems. However, it is difficult for requirement engineers to accurately derive CCSL specifications from natural-language requirement descriptions, mainly because: i) most requirement engineers lack expertise in formal modeling; and ii) few existing tools can facilitate the generation of CCSL specifications. To address these issues, this paper presents a novel approach that combines the merits of Reinforcement Learning (RL) and deductive techniques in logical reasoning for efficient co-synthesis of CCSL specifications. Specifically, our method leverages RL to enumerate the feasible solutions that fill the holes of incomplete specifications, and deductive techniques to judge the quality of each trial. Our proposed deductive mechanisms are useful not only for pruning the enumeration space, but also for guiding the enumeration process toward an optimal solution quickly. Comprehensive experimental results on both well-known benchmarks and complex industrial examples demonstrate the performance and scalability of our method. Compared with the state-of-the-art, our approach can drastically reduce synthesis time by several orders of magnitude while guaranteeing synthesis accuracy.
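    A minimal sketch of the enumeration-plus-deduction loop under stated assumptions: an epsilon-greedy bandit (standing in for the RL component) proposes which constraint operator to try for a specification hole, a deductive check prunes candidates that contradict known facts, and observed quality scores bias future proposals. The operators, checks, and scores are toy placeholders, not CCSL semantics.

```python
# Illustrative sketch: RL-guided enumeration with deductive pruning of candidates.
import random

OPERATORS = ["precedence", "causality", "alternation", "exclusion"]

def deductively_consistent(op, known_facts):
    """Toy pruning rule: reject operators already ruled out by deduction."""
    return op not in known_facts.get("forbidden", set())

def quality(op, known_facts):
    """Toy score standing in for checking the candidate against the requirements."""
    return 1.0 if op == known_facts.get("expected") else 0.1

def synthesize(known_facts, episodes=200, eps=0.2, seed=0):
    rng = random.Random(seed)
    value = {op: 0.0 for op in OPERATORS}        # running reward estimates
    counts = {op: 0 for op in OPERATORS}
    for _ in range(episodes):
        if rng.random() < eps:
            op = rng.choice(OPERATORS)           # explore
        else:
            op = max(value, key=value.get)       # exploit the best estimate so far
        if not deductively_consistent(op, known_facts):
            continue                             # pruned: no evaluation needed
        r = quality(op, known_facts)
        counts[op] += 1
        value[op] += (r - value[op]) / counts[op]
    return max(value, key=value.get)

facts = {"forbidden": {"exclusion"}, "expected": "alternation"}
print("chosen operator for the hole:", synthesize(facts))
```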

    Quantitative Performance Evaluation of Uncertainty-Aware Hybrid AADL Designs Using Statistical Model Checking

    Architecture Analysis and Design Language (AADL) is widely used for the architecture design and analysis of safety-critical real-time systems. Based on the Hybrid Annex, which supports continuous behavior modeling, Hybrid AADL enables seamless interactions between embedded control systems and continuous physical environments. Although Hybrid AADL is promising for dependability prediction through analyzable architecture development, worst-case performance analysis of Hybrid AADL designs can easily lead to overly pessimistic estimates. So far, Hybrid AADL cannot be used to accurately quantify and reason about the overall performance of complex systems that interact intensively with external uncertain environments. To address this problem, this paper proposes a statistical model checking based framework that can quantitatively evaluate uncertainty-aware Hybrid AADL designs against various performance queries. Our approach extends Hybrid AADL to support the modeling of environment uncertainties. Furthermore, we propose a set of transformation rules that automatically translate AADL designs together with designers' requirements into Networks of Priced Timed Automata (NPTA) and performance queries, respectively. Comprehensive experimental results on the Movement Authority (MA) scenario of the Chinese Train Control System Level 3 (CTCS-3) demonstrate the effectiveness of our approach.
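    A minimal sketch of the statistical-model-checking idea behind the framework: estimate the probability that an end-to-end delay meets a bound by Monte Carlo simulation of a toy stochastic timing model, reporting a confidence interval. The model, query, and sample size are assumptions; the actual flow would check such queries on the generated NPTA with a statistical model checker such as UPPAAL SMC.

```python
# Illustrative sketch: Monte Carlo estimation of P(delay <= bound) with a
# normal-approximation 95% confidence interval.
import math
import random

def simulate_end_to_end_delay(rng):
    compute = rng.uniform(2.0, 5.0)          # uncertain execution time (ms)
    network = rng.expovariate(1.0 / 1.5)     # uncertain network delay (ms)
    return compute + network

def estimate_probability(bound_ms=8.0, runs=20000, seed=1):
    rng = random.Random(seed)
    hits = sum(simulate_end_to_end_delay(rng) <= bound_ms for _ in range(runs))
    p = hits / runs
    half_width = 1.96 * math.sqrt(p * (1 - p) / runs)
    return p, half_width

p, hw = estimate_probability()
print(f"P(delay <= 8 ms) ≈ {p:.3f} ± {hw:.3f}")
```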