82 research outputs found

    A Literature Review on the Application of Acoustic Emission to Machine Condition Monitoring

    Acoustic emission (AE) is a common physical phenomenon in which strain energy is released in the form of elastic waves when a material deforms or cracks under stress. Condition monitoring based on AE is a relatively new approach that uses noise/vibration anomalies to detect machine failures, but several challenges still stand in the way of its application. This thesis reviews the literature on AE applications to machine condition monitoring. The principles of AE technology, relevant instruments, machine monitoring and AE signal analysis, and practical examples of AE monitoring applications will be presented. More specifically, the challenges, solutions, and future directions for dealing with signal noise and attenuation will be discussed. Using rotating machinery as an example, the characteristics of AE will be explained in detail. This thesis lays the foundation for the practical use of AE to monitor and analyze the state of machinery and provides guidelines for future collection and analysis of AE signals.
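    The thesis touches on AE signal analysis for condition monitoring and on the noise challenges involved. As a loose, self-contained illustration (not drawn from the thesis), the sketch below band-pass filters a raw AE waveform and computes a few commonly used hit features; the sampling rate, filter band, and feature set are all assumptions chosen for the example.

```python
# Illustrative sketch only: band-pass filtering plus simple AE hit features.
# Sampling rate, band edges, and the feature set are assumptions, not thesis values.
import numpy as np
from scipy import signal
from scipy.stats import kurtosis

def ae_features(waveform, fs=1e6, band=(100e3, 400e3)):
    """Filter a raw AE waveform and compute basic condition-monitoring features."""
    # 4th-order Butterworth band-pass to suppress low-frequency machine noise.
    sos = signal.butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = signal.sosfiltfilt(sos, waveform)
    return {
        "peak_amplitude": float(np.max(np.abs(filtered))),
        "rms": float(np.sqrt(np.mean(filtered ** 2))),
        "kurtosis": float(kurtosis(filtered)),  # impulsiveness indicator
        "energy": float(np.sum(filtered ** 2) / fs),
    }

# Example: a synthetic AE-like burst buried in broadband noise.
fs = 1e6
t = np.arange(0, 0.01, 1 / fs)
burst = np.exp(-2000 * t) * np.sin(2 * np.pi * 200e3 * t)
print(ae_features(burst + 0.05 * np.random.randn(t.size), fs=fs))
```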

    AMER: Automatic Behavior Modeling and Interaction Exploration in Recommender System

    User behavior and feature interactions are crucial in deep learning-based recommender systems. A diverse set of behavior modeling and interaction exploration methods has been proposed in the literature. Nevertheless, designing task-aware recommender systems still requires feature engineering and architecture engineering from domain experts. In this work, we introduce AMER, namely Automatic behavior Modeling and interaction Exploration in Recommender systems with Neural Architecture Search (NAS). The core contributions of AMER are a three-stage search space and a tailored three-step searching pipeline. In the first step, AMER searches for residual blocks that incorporate commonly used operations in the block-wise search space of stage 1 to model sequential patterns in user behavior. In the second step, it progressively investigates useful low-order and high-order feature interactions in the non-sequential interaction space of stage 2. Finally, an aggregation multi-layer perceptron (MLP) with shortcut connections is selected from the flexible dimension settings of stage 3 to combine the features extracted in the previous steps. For efficient and effective NAS, AMER employs one-shot random search in all three steps. Further analysis reveals that AMER's search space covers most representative behavior extraction and interaction investigation methods, which demonstrates the universality of our design. Extensive experimental results over various scenarios show that AMER outperforms competitive baselines that rely on elaborate feature engineering and architecture engineering, indicating both the effectiveness and robustness of the proposed method.
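    For a concrete sense of the one-shot random search used at each step, here is a minimal sketch assuming a hypothetical block-wise operator space and a placeholder evaluator; in the actual pipeline, candidates would be scored with shared supernet weights rather than random scores.

```python
# Minimal sketch of one-shot random search over a block-wise search space.
# The operator choices and the evaluate() placeholder are illustrative assumptions.
import random

SEARCH_SPACE = {  # hypothetical: one sequential operator per residual block
    "block_1": ["avg_pool", "self_attention", "gru", "identity"],
    "block_2": ["avg_pool", "self_attention", "gru", "identity"],
    "block_3": ["avg_pool", "self_attention", "gru", "identity"],
}

def sample_candidate(space):
    """Draw one architecture by picking an operator for every block."""
    return {block: random.choice(ops) for block, ops in space.items()}

def random_search(space, evaluate, n_samples=100):
    """Score n_samples random candidates and keep the best one."""
    best_arch, best_score = None, float("-inf")
    for _ in range(n_samples):
        arch = sample_candidate(space)
        score = evaluate(arch)  # stand-in for a validation metric under shared weights
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch

# Placeholder evaluator so the sketch runs end to end.
print(random_search(SEARCH_SPACE, evaluate=lambda arch: random.random(), n_samples=20))
```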

    AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort

    Story visualization aims to generate a series of images that match a story described in text, requiring the generated images to be of high quality, aligned with the text description, and consistent in character identities. Given the complexity of story visualization, existing methods drastically simplify the problem by considering only a few specific characters and scenarios, or by requiring users to provide per-image control conditions such as sketches. However, these simplifications render such methods unsuitable for real applications. To this end, we propose an automated story visualization system that can effectively generate diverse, high-quality, and consistent sets of story images with minimal human interaction. Specifically, we utilize the comprehension and planning capabilities of large language models for layout planning, and then leverage large-scale text-to-image models to generate sophisticated story images based on the layout. We empirically find that sparse control conditions, such as bounding boxes, are suitable for layout planning, while dense control conditions, e.g., sketches and keypoints, are suitable for generating high-quality image content. To obtain the best of both worlds, we devise a dense condition generation module that transforms simple bounding box layouts into sketch or keypoint control conditions for final image generation, which not only improves image quality but also allows easy and intuitive user interaction. In addition, we propose a simple yet effective method to generate multi-view consistent character images, eliminating the reliance on human labor to collect or draw character images.
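    As a rough, self-contained illustration of converting a sparse layout into a dense control condition, the sketch below rasterizes a hypothetical bounding-box layout into a single-channel control image with PIL; it is only a stand-in for the paper's dense condition generation module, which produces sketch or keypoint conditions for the text-to-image model.

```python
# Illustrative stand-in: rasterize a bounding-box layout into a control image.
# The layout entries and image size are hypothetical.
from PIL import Image, ImageDraw

# Hypothetical layout: (character/object description, normalized x0, y0, x1, y1).
layout = [
    ("a girl holding a red umbrella", 0.10, 0.20, 0.45, 0.95),
    ("a small white dog", 0.55, 0.60, 0.85, 0.95),
]

def layout_to_condition(layout, size=(512, 512)):
    """Draw box outlines on a blank canvas to serve as a dense control image."""
    cond = Image.new("L", size, 0)
    draw = ImageDraw.Draw(cond)
    w, h = size
    for _, x0, y0, x1, y1 in layout:
        draw.rectangle([x0 * w, y0 * h, x1 * w, y1 * h], outline=255, width=3)
    return cond

layout_to_condition(layout).save("layout_condition.png")  # consumed downstream by the image generator
```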

    AutoAssign+: Automatic Shared Embedding Assignment in Streaming Recommendation

    In the domain of streaming recommender systems, conventional methods handle new user IDs or item IDs by assigning initial ID embeddings randomly. However, this practice leads to two practical challenges: (i) items or users with limited interaction data may yield suboptimal prediction performance, and (ii) embedding new IDs or low-frequency IDs requires continually expanding the embedding table, leading to unnecessary memory consumption. In light of these concerns, we introduce a reinforcement learning-driven framework, namely AutoAssign+ (Automatic Shared Embedding Assignment Plus). Specifically, AutoAssign+ uses an Identity Agent as an actor network that plays a dual role: (i) representing low-frequency IDs field-wise with a small set of shared embeddings to improve embedding initialization, and (ii) dynamically determining which ID features should be retained in or removed from the embedding table. The policy of the agent is optimized with the guidance of a critic network. To evaluate the effectiveness of our approach, we perform extensive experiments on three commonly used benchmark datasets. The results demonstrate that AutoAssign+ significantly improves recommendation performance by mitigating the cold-start problem. Furthermore, our framework reduces memory usage by approximately 20-30%, verifying its practical effectiveness and efficiency for streaming recommender systems.
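    A minimal sketch of the shared-embedding idea is given below, assuming a fixed frequency flag and hash-based assignment into a small shared pool in place of the learned Identity Agent policy; all sizes are illustrative.

```python
# Minimal sketch of field-wise shared embeddings for low-frequency IDs.
# A fixed is_hot flag and hashing replace the learned RL policy; sizes are illustrative.
import torch
import torch.nn as nn

class SharedIDEmbedding(nn.Module):
    def __init__(self, num_hot_ids=1000, num_shared=8, dim=16):
        super().__init__()
        self.hot = nn.Embedding(num_hot_ids, dim)    # dedicated rows for frequent IDs
        self.shared = nn.Embedding(num_shared, dim)  # small pool reused by rare/new IDs
        self.num_shared = num_shared

    def forward(self, ids, is_hot):
        # Frequent IDs use their own row; low-frequency or unseen IDs are hashed
        # into the shared pool, so the embedding table never has to grow.
        hot_emb = self.hot(torch.clamp(ids, max=self.hot.num_embeddings - 1))
        shared_emb = self.shared(ids % self.num_shared)
        return torch.where(is_hot.unsqueeze(-1), hot_emb, shared_emb)

emb = SharedIDEmbedding()
ids = torch.tensor([3, 999, 123456])        # the last ID is new / low-frequency
is_hot = torch.tensor([True, True, False])
print(emb(ids, is_hot).shape)               # torch.Size([3, 16])
```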

    Neural Dependencies Emerging from Learning Massive Categories

    This work presents two astonishing findings on neural networks learned for large-scale image classification. 1) Given a well-trained model, the logits predicted for some category can be directly obtained by linearly combining the predictions of a few other categories, a phenomenon we call neural dependency. 2) Neural dependencies exist not only within a single model, but even between two independently learned models, regardless of their architectures. Towards a theoretical analysis of these phenomena, we demonstrate that identifying neural dependencies is equivalent to solving the Covariance Lasso (CovLasso) regression problem proposed in this paper. By investigating the properties of the problem's solution, we confirm that neural dependency is guaranteed by a redundant logit covariance matrix, a condition easily met given massive categories, and that neural dependency is highly sparse, implying that one category correlates with only a few others. We further empirically show the potential of neural dependencies for understanding internal data correlations, generalizing models to unseen categories, and improving model robustness with a dependency-derived regularizer. Code for this work will be made publicly available.
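    The first finding can be probed directly: regress one category's logits on the logits of all other categories with a sparse penalty and inspect which coefficients survive. The sketch below uses a plain scikit-learn Lasso on synthetic logits as a stand-in for the paper's CovLasso formulation; the data and hyperparameters are illustrative.

```python
# Illustrative probe of a neural dependency with a standard Lasso
# (a stand-in for the paper's CovLasso); the logits here are synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_classes, target = 2000, 50, 7

# Synthetic "logits": the target category is a sparse combination of three others.
logits = rng.normal(size=(n_samples, n_classes))
logits[:, target] = (0.6 * logits[:, 3] - 0.4 * logits[:, 12]
                     + 0.2 * logits[:, 30] + 0.01 * rng.normal(size=n_samples))

X = np.delete(logits, target, axis=1)  # logits of every other category
y = logits[:, target]                  # logits of the probed category

model = Lasso(alpha=0.01).fit(X, y)
support = np.flatnonzero(np.abs(model.coef_) > 1e-3)
# Note: indices refer to columns of X (the target column has been removed).
print("depends on columns:", support, "with coefficients:", model.coef_[support])
```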

    Linear Positional Isomer Sorting in Nonporous Adaptive Crystals of a Pillar[5]arene

    Here we show a new adsorptive separation approach using nonporous adaptive crystals of a pillar[5]arene. Desolvated perethylated pillar[5]arene crystals (EtP5α), despite their nonporous character, selectively adsorb 1-pentene (1-Pe) over its positional isomer 2-pentene (2-Pe), leading to a structural change from EtP5α to the 1-Pe-loaded structure (1-Pe@EtP5). The purity of 1-Pe reaches 98.7% in just one cycle, and EtP5α can be reused without losing separation performance.