SWAP: Exploiting Second-Ranked Logits for Adversarial Attacks on Time Series
Time series classification (TSC) has emerged as a critical task in various
domains, and deep neural models have shown superior performance in TSC tasks.
However, these models are vulnerable to adversarial attacks, where subtle
perturbations can significantly impact the prediction results. Existing
adversarial methods often suffer from over-parameterization or random logit
perturbation, hindering their effectiveness. Additionally, increasing the
attack success rate (ASR) typically involves generating more noise, making the
attack more easily detectable. To address these limitations, we propose SWAP, a
novel attack method for TSC models. SWAP focuses on enhancing the confidence
of the second-ranked logits while minimizing the manipulation of other logits.
This is achieved by minimizing the Kullback-Leibler divergence between the
target logit distribution and the predictive logit distribution. Experimental
results demonstrate that SWAP achieves state-of-the-art performance, with an
ASR exceeding 50%, an 18% increase over existing methods.
Comment: 10 pages, 8 figures
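The core objective can be sketched in a few lines: build a target distribution by exchanging the top two logits, then penalize the KL divergence between that target and the model's predictive distribution. This is a hedged numpy illustration of the idea described in the abstract, not the authors' implementation; the swap-based target construction is an assumed reading of the method, and the example logits are made up.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    # KL(p || q) for strictly positive distributions
    return float(np.sum(p * np.log(p / q)))

def swap_target(logits):
    # Hypothetical target construction: exchange the top-1 and top-2
    # logits so the second-ranked class becomes the most confident,
    # while every other logit is left untouched.
    order = np.argsort(logits)
    t = logits.astype(float).copy()
    t[order[-1]], t[order[-2]] = t[order[-2]], t[order[-1]]
    return softmax(t)

logits = np.array([2.0, 1.5, 0.1, -0.3])
target = swap_target(logits)          # peak moves to the second-ranked class
loss = kl(target, softmax(logits))    # objective to minimize over perturbations
```

In an actual attack the loss would be minimized with respect to an input perturbation by backpropagation; here only the target construction and the divergence are shown.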
Modeling Multi-wavelength Pulse Profiles of Millisecond Pulsar PSR B1821-24
PSR B1821-24 is a solitary millisecond pulsar (MSP) that radiates
multi-wavelength pulsed photons. It has complex radio, X-ray and γ-ray
pulse profiles with distinct peak phase-separations that challenge the
traditional caustic emission models. Using the single-pole annular gap model
with a suitable magnetic inclination angle and viewing angle, we managed to
reproduce its pulse profiles in all three wavebands. We find that the middle
radio peak originates from the core gap region at high altitudes, while the
other two radio peaks originate from the annular gap region at relatively low
altitudes. The two peaks in both the X-ray and γ-ray wavebands fundamentally
originate from the annular gap region, while the γ-ray emission generated in
the core gap region contributes somewhat to the first γ-ray peak. Precisely
reproducing the multi-wavelength pulse profiles of PSR B1821-24 enables us to
understand the emission regions of the distinct wavebands and to justify
pulsar emission models.
Comment: Accepted for publication in Ap
A dynamic prescriptive maintenance model considering system aging and degradation
This paper develops a dynamic maintenance strategy for a system subject to aging and degradation. The influence of the degradation level and aging on the system failure rate is modeled in an additive way. Based on the degradation level observed at inspection, repair or replacement is carried out on the system. Previous studies assume that repair always improves the health condition of the system; in our study, however, repair reduces the system age but, at the same time, increases the degradation level. Considering this two-fold influence of maintenance actions, we first perform a reliability analysis of the system. The evolution of system reliability serves as a foundation for establishing the maintenance model. The optimal maintenance strategy is obtained by minimizing the long-run cost rate with respect to the repair cycle. At each inspection, the parameters of the degradation process are updated by maximum a posteriori estimation when a new observation arrives. The effectiveness of the proposed model is illustrated through a case study of locomotive wheel-sets. The maintenance model accounts for the influence of degradation and aging on system failure and dynamically determines the optimal inspection time, making it more flexible than traditional stationary maintenance strategies and able to provide better performance in the field.
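The additive failure-rate structure described above can be illustrated with a minimal Monte Carlo sketch. Everything specific below is an assumption for illustration only: a Weibull-type aging hazard, gamma-process degradation increments, and all numerical parameter values; none of these come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_rate(age, degradation, a=0.01, b=1.5, eta=0.05):
    # Additive form assumed from the abstract: an aging (Weibull-type)
    # hazard plus a term proportional to the current degradation level.
    return a * b * age ** (b - 1) + eta * degradation

def reliability(horizon, dt=0.1, n_paths=2000):
    # Monte Carlo estimate of R(horizon): a path survives if its
    # integrated hazard never exceeds an Exp(1) threshold.
    steps = int(horizon / dt)
    survived = 0
    for _ in range(n_paths):
        x = 0.0                          # degradation level
        h = 0.0                          # integrated hazard
        threshold = rng.exponential(1.0)
        alive = True
        for k in range(steps):
            x += rng.gamma(0.5 * dt, 1.0)   # gamma-process increment (illustrative)
            h += failure_rate((k + 1) * dt, x) * dt
            if h > threshold:
                alive = False
                break
        survived += alive
    return survived / n_paths
```

Reliability curves computed this way would then feed the long-run cost-rate optimization over the repair cycle that the abstract describes.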
Dimension Independent Mixup for Hard Negative Sample in Collaborative Filtering
Collaborative filtering (CF) is a widely employed technique that predicts
user preferences based on past interactions. Negative sampling plays a vital
role in training CF-based models with implicit feedback. In this paper, we
propose a novel perspective based on the sampling area to revisit existing
sampling methods. We point out that current sampling methods mainly focus on
Point-wise or Line-wise sampling, lacking flexibility and leaving a significant
portion of the hard sampling area unexplored. To address this limitation, we
propose Dimension Independent Mixup for Hard Negative Sampling (DINS), which is
the first Area-wise sampling method for training CF-based models. DINS
comprises three modules: Hard Boundary Definition, Dimension Independent Mixup,
and Multi-hop Pooling. Experiments with real-world datasets on both matrix
factorization and graph-based models demonstrate that DINS outperforms other
negative sampling methods, establishing its effectiveness and superiority. Our
work contributes a new perspective, introduces Area-wise sampling, and presents
DINS as a novel approach that achieves state-of-the-art performance for
negative sampling. Our implementations are available in PyTorch.
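The contrast between Line-wise and Area-wise sampling can be made concrete: scalar mixup confines a synthetic negative to the line segment between two embeddings, whereas drawing an independent mixing weight per dimension lets it land anywhere in the axis-aligned box they span. The numpy sketch below is an assumed reading of the Dimension Independent Mixup module only, not the authors' PyTorch implementation, and it omits the Hard Boundary Definition and Multi-hop Pooling modules; the embeddings are made up.

```python
import numpy as np

rng = np.random.default_rng(42)

def dimension_independent_mixup(pos, neg):
    # Area-wise sampling (assumed reading of DINS): one independent
    # mixing weight per embedding dimension, so the synthetic hard
    # negative can land anywhere in the axis-aligned box spanned by
    # the two embeddings, not only on the segment between them.
    lam = rng.uniform(0.0, 1.0, size=np.shape(pos))
    return lam * np.asarray(pos) + (1.0 - lam) * np.asarray(neg)

pos = np.array([1.0, 0.0, 2.0])    # positive item embedding (illustrative)
neg = np.array([0.0, 1.0, -1.0])   # hard negative embedding (illustrative)
synthetic = dimension_independent_mixup(pos, neg)
```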
System availability assessment using a parametric Bayesian approach: a case study of balling drums
Assessment of system availability usually uses either an analytical approach (e.g., Markov/semi-Markov) or a simulation approach (e.g., Monte Carlo simulation). However, the former cannot handle complicated state changes and the latter is computationally expensive. Traditional Bayesian approaches may solve these problems, but because of their computational difficulties they are not widely applied. The recent proliferation of Markov Chain Monte Carlo (MCMC) methods has led to the use of Bayesian inference in a wide variety of fields. This study proposes a new approach to system availability assessment: a parametric Bayesian approach using MCMC that takes advantage of both the analytical and simulation methods. In this approach, mean time to failure (MTTF) and mean time to repair (MTTR) are treated as distributions instead of being “averaged”, which better reflects reality and compensates for the limitations of the simulation data sample size. To demonstrate the approach, the paper considers a case study of a balling drum system in a mining company. In this system, MTTF and MTTR are modeled with a Bayesian Weibull model and a Bayesian lognormal model, respectively. The results show that the proposed approach can integrate the analytical and simulation methods to assess system availability and could be applied to other technical problems in asset management (e.g., other industries, other systems).
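Once posterior samples of the failure and repair parameters are available, pushing them through the closed-form Weibull and lognormal means yields a posterior distribution over the steady-state availability A = MTTF / (MTTF + MTTR). The sketch below substitutes hypothetical normal draws for real MCMC output, and every numerical value is illustrative rather than taken from the case study.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)
n = 5000

# Hypothetical posterior samples (in the study these come from MCMC):
# Weibull(shape k, scale s) time to failure, lognormal(mu, sigma) repair time.
k = rng.normal(1.8, 0.1, n)
s = rng.normal(400.0, 20.0, n)            # hours
mu = rng.normal(2.0, 0.1, n)
sigma = np.abs(rng.normal(0.5, 0.05, n))

mttf = s * np.array([gamma(1 + 1 / ki) for ki in k])  # Weibull mean
mttr = np.exp(mu + sigma ** 2 / 2)                    # lognormal mean
availability = mttf / (mttf + mttr)                   # A = MTTF / (MTTF + MTTR)

lo, hi = np.percentile(availability, [2.5, 97.5])     # 95% credible interval
```

Reporting the full posterior of A, rather than a single point estimate, is what distinguishes this treatment from simply "averaging" MTTF and MTTR.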
A review on deep learning applications in prognostics and health management
Deep learning has attracted intense interest in Prognostics and Health Management (PHM) because of its enormous representational power, automated feature-learning capability and best-in-class performance in solving complex problems. This paper surveys recent advances in PHM methodologies using deep learning, with the aim of identifying research gaps and suggesting further improvements. After a brief introduction to several deep learning models, we review and analyze applications of fault detection, diagnosis and prognosis using deep learning. The survey validates the universal applicability of deep learning to various types of input in PHM, including vibration, imagery, time-series and structured data. It also reveals that deep learning provides a one-size-fits-all framework for the primary PHM subfields: fault detection uses reconstruction error or stacks a binary classifier on top of the network to detect anomalies; fault diagnosis typically adds a softmax layer to perform multi-class classification; prognosis adds a continuous regression layer to predict remaining useful life. The general framework suggests the possibility of transfer learning across PHM applications. The survey reveals some common properties and identifies the research gaps in each PHM subfield. It concludes by summarizing major challenges and potential opportunities in the domain.
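The one-size-fits-all framework the survey describes (one shared feature extractor with a detection, diagnosis, or prognosis head on top) can be sketched schematically. The single tanh layer and random weights below are purely illustrative stand-ins for a trained deep network; nothing here reproduces any specific model from the survey.

```python
import numpy as np

rng = np.random.default_rng(7)
d_in, d_h, n_classes = 8, 4, 3

W = rng.normal(size=(d_in, d_h))   # shared backbone weights (random stand-in)

def backbone(x):
    # One tanh layer standing in for the deep feature extractor.
    return np.tanh(x @ W)

# 1) Fault detection: score anomalies by reconstruction error.
W_dec = rng.normal(size=(d_h, d_in))
def detection_score(x):
    return float(np.mean((x - backbone(x) @ W_dec) ** 2))

# 2) Fault diagnosis: softmax head for multi-class classification.
W_cls = rng.normal(size=(d_h, n_classes))
def diagnose(x):
    z = backbone(x) @ W_cls
    e = np.exp(z - z.max())
    return e / e.sum()

# 3) Prognosis: regression head predicting remaining useful life.
w_rul = rng.normal(size=d_h)
def predict_rul(x):
    return float(backbone(x) @ w_rul)

x = rng.normal(size=d_in)   # one feature vector (illustrative)
```

Because all three heads share the same backbone, features learned for one subfield can in principle be transferred to another, which is the transfer-learning possibility the survey points out.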
Visualizing Graphene Based Sheets by Fluorescence Quenching Microscopy
Graphene based sheets have stimulated great interest due to their superior
mechanical, electrical and thermal properties. A general visualization method
that allows quick observation of these single atomic layers would be highly
desirable as it can greatly facilitate sample evaluation and manipulation, and
provide immediate feedback to improve synthesis and processing strategies. Here
we report that graphene based sheets can be made highly visible under a
fluorescence microscope by quenching the emission from a dye coating, which can
be conveniently removed afterwards by rinsing without disrupting the sheets.
Current imaging techniques for graphene based sheets rely on the use of special
substrates. In contrast, the fluorescence quenching mechanism is no longer
limited by the types of substrates. Graphene, reduced graphene oxide, or even
graphene oxide sheets deposited on arbitrary substrates can now be readily
visualized by eye with good contrast for layer counting. Direct observation of
suspended sheets in solution was also demonstrated. The fluorescence quenching
microscopy offers unprecedented imaging flexibility and could become a general
tool for characterizing graphene based materials.
Comment: J. Am. Chem. Soc., Article ASA
NMR Detection of Single-Walled Carbon Nanotubes in Solution
Corresponding author address: Sun, Y.P. (corresponding author), Clemson Univ, Dept Chem, Clemson, SC 29634 USA
Addresses:
1. Clemson Univ, Dept Chem, Clemson, SC 29634 USA
2. Clemson Univ, Lab Emerging Mat & Technol, Clemson, SC 29634 USA
The detection of nanotube carbons in solution by C-13 NMR is reported. The highly soluble sample was obtained from the functionalization of C-13-enriched single-walled carbon nanotubes (SWNTs) with diamine-terminated oligomeric poly(ethylene glycol) (PEG(1500N)). The ferromagnetic impurities due to the residual metal catalysts were removed from the sample via repeated magnetic separation. The nanotube carbon signals are broad but partially resolved into two overlapping peaks, which are tentatively assigned to nanotube carbons on semiconducting (upfield) and metallic (downfield) SWNTs. The solid-state NMR signals of the same sample are similarly resolved. Mechanistic and practical implications of the results are discussed.
Open X-Embodiment: Robotic learning datasets and RT-X models
Large, high-capacity models trained on diverse datasets have shown remarkable success in efficiently tackling downstream applications. In domains from NLP to computer vision, this has led to a consolidation of pretrained models, with general pretrained backbones serving as a starting point for many applications. Can such a consolidation happen in robotics? Conventionally, robotic learning methods train a separate model for every application, every robot, and even every environment. Can we instead train a "generalist" X-robot policy that can be adapted efficiently to new robots, tasks, and environments? In this paper, we provide datasets in standardized data formats and models to make it possible to explore this possibility in the context of robotic manipulation, alongside experimental results that provide an example of effective X-robot policies. We assemble a dataset from 22 different robots, collected through a collaboration between 21 institutions, demonstrating 527 skills (160,266 tasks). We show that a high-capacity model trained on this data, which we call RT-X, exhibits positive transfer and improves the capabilities of multiple robots by leveraging experience from other platforms. The project website is robotics-transformer-x.github.io.