
    Tiresias: Online Anomaly Detection for Hierarchical Operational Network Data

    Operational network data, i.e., management data such as customer care call logs and equipment system logs, is a very important source of information for network operators to detect problems in their networks. Unfortunately, there is a lack of efficient tools to automatically track and detect anomalous events in operational data, causing ISP operators to rely on manual inspection of this data. While anomaly detection has been widely studied in the context of network data, operational data presents several new challenges, including the volatility and sparseness of the data, and the need to perform fast detection (complicating the application of schemes that require offline processing or large/stable data sets to converge). To address these challenges, we propose Tiresias, an automated approach to locating anomalous events in hierarchical operational data. Tiresias leverages the hierarchical structure of operational data to identify high-impact aggregates (e.g., locations in the network, failure modes) likely to be associated with anomalous events. To accommodate different kinds of operational network data, Tiresias consists of an online detection algorithm with low time and space complexity, while preserving high detection accuracy. We present results from two case studies using operational data collected at a large commercial IP network operated by a Tier-1 ISP: customer care call logs and set-top box crash logs. By comparing with a reference set verified by the ISP's operational group, we validate that Tiresias can achieve >94% accuracy in locating anomalies. Tiresias also discovered several previously unknown anomalies in the ISP's customer care cases, demonstrating its effectiveness.
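    The abstract does not give the algorithm's details, but the core idea it describes (an online, low-memory detector over per-aggregate counts in a hierarchy, e.g., keys such as (location, failure mode)) can be sketched as follows. This is an illustrative approximation, not the paper's actual method; the EWMA-based scoring, the `alpha` and `threshold` parameters, and all names are assumptions.

    ```python
    from collections import defaultdict

    class HierarchicalAnomalyDetector:
        """Illustrative online detector over hierarchical event counts.

        Keeps an exponentially weighted moving average (EWMA) of the count
        and its variance per aggregate key, so time and space cost per
        update is O(1) per key, matching the "online, low time and space
        complexity" property described in the abstract.
        """

        def __init__(self, alpha=0.3, threshold=3.0):
            self.alpha = alpha          # EWMA smoothing factor (assumed)
            self.threshold = threshold  # deviation threshold in std units (assumed)
            self.mean = defaultdict(float)
            self.var = defaultdict(float)
            self.seen = defaultdict(int)

        def update(self, key, count):
            """Feed one time-bin count for an aggregate; return True if anomalous."""
            self.seen[key] += 1
            if self.seen[key] == 1:
                self.mean[key] = count  # initialize on first observation
                return False
            diff = count - self.mean[key]
            std = self.var[key] ** 0.5
            # Flag only after a short warm-up; floor std at 1.0 so a flat
            # history does not make every nonzero deviation an alarm.
            anomalous = self.seen[key] > 3 and abs(diff) > self.threshold * max(std, 1.0)
            self.mean[key] += self.alpha * diff
            self.var[key] = (1 - self.alpha) * (self.var[key] + self.alpha * diff * diff)
            return anomalous

    # Hypothetical usage: steady set-top box crash counts, then a surge.
    det = HierarchicalAnomalyDetector()
    for _ in range(20):
        det.update(("northeast", "stb_crash"), 10)
    spike = det.update(("northeast", "stb_crash"), 100)  # sudden surge flagged
    ```

    Because state is a few floats per aggregate key, the same detector instance can track every node of the hierarchy (region, region+failure mode, etc.) simultaneously by feeding each prefix as its own key.
    
    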

    Facile hydrothermal synthesis and optical limiting properties of TiO2-reduced graphene oxide nanocomposites

    TiO2/reduced graphene oxide (RGO) nanocomposites Gx (RGO-titania nanocomposite, x grams of tetrabutyl titanate per 0.03 g of RGO, x = 0.25, 0.50, 1.00) were prepared by a hydrothermal method: graphene oxide was reduced to RGO in a 2:1 water:ethanol mixture in the presence of varying quantities of tetrabutyl titanate, which deposited as TiO2 on the RGO sheets. The nanocomposites were characterized by a combination of Fourier transform infrared spectroscopy, diffuse reflectance ultraviolet-visible spectroscopy, photoluminescence spectroscopy, Raman spectroscopy, X-ray powder diffraction, X-ray photoelectron spectroscopy, and transmission electron microscopy studies. The nanocomposite G0.25 exhibits enhanced nonlinear optical properties compared to its individual components, which is ascribed to a combination of mechanisms. The role of defects and electron/energy transfer in the optical limiting performance of G0.25 was clarified with the help of Raman and photoluminescence spectroscopies. Intensity-dependent switching between reverse saturable absorption and saturable absorption behavior was observed with the G0.50 nanocomposite.

    Convection enhanced delivery of light responsive antigen capturing oxygen generators for chemo-phototherapy triggered adaptive immunity

    Acknowledgments: Chi-Hwa Wang is supported by the National Additive Manufacturing Innovation Cluster @ the National University of Singapore. Vishnu Sunil and Teoh Jia Heng greatly appreciate the National University of Singapore Research Scholarship for the funding of their Ph.D. studies at the National University of Singapore. Peer reviewed. Postprint.

    Transfer Attacks and Defenses for Large Language Models on Coding Tasks

    Modern large language models (LLMs), such as ChatGPT, have demonstrated impressive capabilities for coding tasks, including writing and reasoning about code. They improve upon previous neural network models of code, such as code2seq or seq2seq, which already demonstrated competitive results when performing tasks such as code summarization and identifying code vulnerabilities. However, these previous code models were shown to be vulnerable to adversarial examples, i.e., small syntactic perturbations that do not change the program's semantics, such as the inclusion of "dead code" through false conditions or the addition of inconsequential print statements, designed to "fool" the models. LLMs can also be vulnerable to the same adversarial perturbations, but a detailed study of this concern has been lacking so far. In this paper we aim to investigate the effect of adversarial perturbations on coding tasks with LLMs. In particular, we study the transferability of adversarial examples, generated through white-box attacks on smaller code models, to LLMs. Furthermore, to make the LLMs more robust against such adversaries without incurring the cost of retraining, we propose prompt-based defenses that involve modifying the prompt to include additional information, such as examples of adversarially perturbed code and explicit instructions for reversing adversarial perturbations. Our experiments show that adversarial examples obtained with a smaller code model are indeed transferable, weakening the LLMs' performance. The proposed defenses show promise in improving the model's resilience, paving the way to more robust defensive solutions for LLMs in code-related applications.
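    To make the attack surface concrete, here is a minimal sketch of the kind of semantics-preserving perturbation the abstract mentions: a dead branch behind a false condition plus an inconsequential statement. The function names and the specific edits are illustrative assumptions, not examples drawn from the paper itself.

    ```python
    # Original snippet a code model might summarize as "return the maximum element".
    def find_max(values):
        best = values[0]
        for v in values[1:]:
            if v > best:
                best = v
        return best

    # Perturbed variant: identical input/output behavior, altered surface form.
    # The unreachable branch and the never-read assignment are the "dead code"
    # perturbations described above; they can still shift a model's prediction.
    def find_max_perturbed(values):
        best = values[0]
        if False:  # dead branch: never executes
            print("debug: unreachable")
        for v in values[1:]:
            _unused = 0  # inconsequential statement, never read
            if v > best:
                best = v
        return best
    ```

    Since both functions compute the same result on every input, any change in a model's summary or vulnerability verdict between the two is attributable purely to syntax, which is what makes such examples useful probes of robustness.
    
    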