
    Local martingale difference approach for service selection with dynamic QoS

    Users in service-oriented architecture (SOA) seek the best quality of service (QoS) by selecting a service from candidates that respond in succession. When QoS changes dynamically, deciding which service to choose and when to stop searching is difficult for a user who must make the choice online. Lacking accurate knowledge of the service distribution, the user cannot make a well-informed decision. The Local Martingale Difference (LMD) approach is developed in this paper to help users achieve results that are optimal in a probabilistic sense. First, the stopping time is proved to be bounded, which ensures that an optimal solution exists. Then, a global estimation over the time horizon is transformed into a local determination based on the current martingale difference, making the algorithm feasible. Independent of any predetermined threshold or manual intervention, LMD enables users to stop near the optimal time using only the information collected during the stochastic process. Verified to be efficient in comparison with three traditional methods, LMD is applicable to a wide range of applications with dynamic QoS.
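    The abstract does not state the LMD stopping rule itself; purely as an illustration of the online selection setting it addresses, the sketch below observes QoS values one at a time and stops according to a simple local-improvement heuristic. The function names, the stopping rule, and the QoS distribution are placeholder assumptions, not the paper's algorithm.

        import random

        def simulate_online_selection(qos_stream, stop_rule):
            """Observe QoS values one at a time; stop_rule(history) decides when
            to accept the newest candidate and end the search."""
            history = []
            for t, q in enumerate(qos_stream):
                history.append(q)
                if stop_rule(history) or t == len(qos_stream) - 1:
                    return t, q  # index and QoS of the chosen service

        def local_improvement_rule(history):
            """Illustrative local rule only: stop once the newest observation
            exceeds the running mean of earlier ones by one standard deviation."""
            if len(history) < 3:
                return False
            prev = history[:-1]
            mean = sum(prev) / len(prev)
            std = (sum((x - mean) ** 2 for x in prev) / len(prev)) ** 0.5
            return history[-1] > mean + std

        random.seed(0)
        stream = [random.gauss(0.7, 0.1) for _ in range(50)]  # dynamic QoS samples
        print(simulate_online_selection(stream, local_improvement_rule))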

    Template-dependent multiple displacement amplification for profiling human circulating RNA

    Multiple displacement amplification (MDA) is widely used in whole-genome/transcriptome amplification. However, template-independent amplification (TIA) in MDA is a commonly observed phenomenon, particularly when high concentrations of random hexamer primers and extended incubation times are used. Here, we demonstrate that the use of random pentamer primers with 5′ ends blocked by a C18 spacer results in MDA that proceeds solely in a template-dependent manner, a technique we have named tdMDA. Together with an optimized procedure for the removal of residual genomic DNA during RNA extraction, tdMDA was used to profile circulating RNA from 0.2 mL of patient sera. In comparison to regular MDA, tdMDA showed no quantifiable DNA amplification in the negative control, a remarkable reduction of unmapped reads from Illumina sequencing (7 ± 10.9% versus 58.6 ± 39%, P = 0.006), and increased mapping rates of the serum transcriptome (26.9 ± 7.9% versus 5.8 ± 8.2%, P = 3.8 × 10⁻⁴). Transcriptome profiles could be used to separate patients with chronic hepatitis C virus (HCV) infection from those with HCV-associated hepatocellular carcinoma (HCC). We conclude that tdMDA should facilitate RNA-based liquid biopsy, as well as other genome studies involving biological specimens with ultralow amounts of genetic material.

    Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models

    In this paper, we present a simple yet surprisingly effective technique to induce "selective amnesia" in a backdoored model. Our approach, called SEAM, is inspired by the problem of catastrophic forgetting (CF), a long-standing issue in continual learning. The idea is to retrain a given DNN model on randomly labeled clean data, inducing CF so that the model suddenly forgets both the primary and backdoor tasks; we then recover the primary task by retraining the randomized model on correctly labeled clean data. We analyze SEAM by modeling the unlearning process as continual learning and further approximating the DNN with a Neural Tangent Kernel to measure CF. Our analysis shows that the random-labeling step maximizes CF on an unknown backdoor in the absence of triggered inputs, while preserving enough feature extraction in the network to enable a fast revival of the primary task. We further evaluate SEAM on both image-processing and natural-language-processing tasks, under both data-contamination and training-manipulation attacks, over thousands of models either trained on popular image datasets or provided by the TrojAI competition. Our experiments show that SEAM vastly outperforms state-of-the-art unlearning techniques, achieving high Fidelity (measuring the gap between the accuracy of the primary task and that of the backdoor) within a few minutes (about 30 times faster than training a model from scratch on MNIST) and with only a small amount of clean data (0.1% of training data for TrojAI models).
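    The two-stage recipe described above (retrain on randomly labeled clean data to force forgetting, then retrain on correctly labeled clean data to recover the primary task) can be sketched as two ordinary fine-tuning loops. The PyTorch snippet below is a minimal illustration with a toy model and dataset; the names, epochs, and learning rate are assumptions, and it is not the authors' released implementation.

        import torch
        import torch.nn.functional as F
        from torch.utils.data import DataLoader, TensorDataset

        def finetune(model, loader, epochs, lr, randomize_labels, num_classes):
            """One SEAM-style stage: train on clean inputs, optionally with random labels."""
            opt = torch.optim.SGD(model.parameters(), lr=lr)
            model.train()
            for _ in range(epochs):
                for x, y in loader:
                    if randomize_labels:
                        y = torch.randint(0, num_classes, y.shape)  # forgetting stage
                    loss = F.cross_entropy(model(x), y)
                    opt.zero_grad()
                    loss.backward()
                    opt.step()

        def seam_like_unlearning(model, clean_loader, num_classes, epochs=1, lr=1e-2):
            # Stage 1: random labels on clean data -> catastrophic forgetting of
            # both the primary task and any backdoor.
            finetune(model, clean_loader, epochs, lr, True, num_classes)
            # Stage 2: correct labels on the same clean data -> recover the primary task.
            finetune(model, clean_loader, epochs, lr, False, num_classes)
            return model

        # Toy usage with a placeholder model and a tiny "clean" dataset.
        toy_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
        x = torch.randn(64, 1, 28, 28)
        y = torch.randint(0, 10, (64,))
        loader = DataLoader(TensorDataset(x, y), batch_size=16)
        seam_like_unlearning(toy_model, loader, num_classes=10)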

    LSTM Learning with Bayesian and Gaussian Processing for Anomaly Detection in Industrial IoT

    The data generated by millions of sensors in the Industrial Internet of Things (IIoT) are extremely dynamic, heterogeneous, and large in scale, which poses great challenges to real-time analysis and decision making for anomaly detection in IIoT. In this paper, we propose an LSTM-Gauss-NBayes method, a synergy of the long short-term memory neural network (LSTM-NN) and the Gaussian Naive Bayes model for outlier detection in IIoT. In a nutshell, the LSTM-NN is trained on normal time series and detects outliers by feeding its prediction error to a Gaussian Naive Bayes classifier. Our method exploits the advantages of both models: the strong capability of LSTM to predict future time points and the excellent classification performance Gaussian Naive Bayes achieves on the prediction error. Empirical studies demonstrate that our solution outperforms the best-known competitors, making it a preferable choice for detecting anomalies.
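    As a rough illustration of this two-stage idea (an LSTM models normal time series, and a Gaussian Naive Bayes classifier operates on its prediction errors), the sketch below uses a synthetic series, placeholder labels, and arbitrary hyperparameters; it is not the authors' implementation.

        import numpy as np
        import torch
        import torch.nn as nn
        from sklearn.naive_bayes import GaussianNB

        class Forecaster(nn.Module):
            """LSTM that predicts the next point of a univariate series from a window."""
            def __init__(self, hidden=32):
                super().__init__()
                self.lstm = nn.LSTM(1, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x):                    # x: (batch, window, 1)
                out, _ = self.lstm(x)
                return self.head(out[:, -1])         # (batch, 1)

        def windows(series, w=20):
            X = np.stack([series[i:i + w] for i in range(len(series) - w)])
            y = series[w:]
            return (torch.tensor(X[..., None], dtype=torch.float32),
                    torch.tensor(y[:, None], dtype=torch.float32))

        # 1) Train the LSTM on normal data only (placeholder synthetic series).
        rng = np.random.default_rng(0)
        normal = np.sin(np.linspace(0, 60, 2000)) + 0.05 * rng.standard_normal(2000)
        X, y = windows(normal)
        model = Forecaster()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(5):
            loss = nn.functional.mse_loss(model(X), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

        # 2) Fit Gaussian Naive Bayes on the prediction errors.  Real labels would
        #    come from annotated data; here the largest 2% of errors are marked
        #    anomalous purely to make the sketch runnable.
        with torch.no_grad():
            err = (model(X) - y).abs().numpy()
        labels = (err[:, 0] > np.quantile(err[:, 0], 0.98)).astype(int)
        gnb = GaussianNB().fit(err, labels)
        print(gnb.predict(err[:5]))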

    MQA: Answering the Question via Robotic Manipulation

    In this paper, we propose a novel task -- Manipulation Question Answering (MQA), where the robot is required to find the answer to the question by actively exploring the environment via manipulation. A framework consisting of a QA model and a manipulation model is proposed to solve this problem. For the QA model, we adopt the method of Visual Question Answering (VQA). For the manipulation model, a Deep Q Network (DQN) model is proposed to generate manipulations. By manipulating objects, the robot can continuously explore the bin until the answer to the question is found. Besides, a novel dataset for simulation that contains a variety of object models, complicated scenarios and corresponding question-answer pairs is established. Extensive experiments have been conducted to validate the effectiveness of the proposed framework
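    As a rough sketch of the interaction loop implied by the abstract (manipulate the bin until the question can be answered), the snippet below wires together a placeholder environment, answerer, and policy; every class and function name here is an illustrative assumption, not the authors' framework.

        import random

        # Toy stand-ins so the loop runs; a real system would plug in a simulator,
        # a trained VQA model, and a trained DQN manipulation policy.
        class ToyBinEnv:
            question = "is there a red cube in the bin?"
            def reset(self):
                self.visible = False
                return self.visible
            def step(self, action):              # a push may reveal the object
                if action == "push" and random.random() < 0.5:
                    self.visible = True
                return self.visible, 0.0, False, {}

        def vqa_answer(obs, question):
            # Pretend answerer: confident only once the object is visible.
            return ("yes", 0.95) if obs else ("unknown", 0.1)

        def dqn_policy(obs):
            return "push"                        # placeholder greedy action

        def mqa_episode(env, max_steps=20, threshold=0.9):
            obs = env.reset()
            for _ in range(max_steps):
                answer, score = vqa_answer(obs, env.question)
                if score >= threshold:           # stop manipulating once confident
                    return answer
                obs, _, _, _ = env.step(dqn_policy(obs))
            return vqa_answer(obs, env.question)[0]

        random.seed(0)
        print(mqa_episode(ToyBinEnv()))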

    NMR Spectra Denoising with Vandermonde Constraints

    Nuclear magnetic resonance (NMR) spectroscopy serves as an important tool for analyzing chemicals and proteins in bioengineering. However, NMR signals are easily contaminated by noise during data acquisition, which can affect subsequent quantitative analysis. Denoising NMR signals has therefore been a long-standing concern. In this work, we propose an optimization-model-based iterative denoising method, CHORD-V, which treats the time-domain NMR signal as a sum of damped exponentials and maintains this exponential signal form through a Vandermonde factorization. Results on both synthetic and real NMR data show that CHORD-V outperforms the typical Cadzow and rQRd methods as well as the state-of-the-art CHORD method. CHORD-V restores low-intensity spectral peaks more accurately, especially when the noise level is relatively high.
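    The abstract names Cadzow denoising as one of the baselines; as illustrative context only (not the paper's CHORD-V algorithm), the NumPy sketch below shows a classic Cadzow iteration on a synthetic signal of damped exponentials: build a Hankel matrix, truncate its rank, and average anti-diagonals back to a signal. The signal parameters, rank, and iteration count are placeholder assumptions.

        import numpy as np

        def hankel(x, L):
            """Build an L x (N-L+1) Hankel matrix from a 1-D signal x."""
            N = len(x)
            return np.array([x[i:i + N - L + 1] for i in range(L)])

        def dehankel(H):
            """Average anti-diagonals to map a Hankel-like matrix back to a signal."""
            L, K = H.shape
            x = np.zeros(L + K - 1, dtype=H.dtype)
            counts = np.zeros(L + K - 1)
            for i in range(L):
                for j in range(K):
                    x[i + j] += H[i, j]
                    counts[i + j] += 1
            return x / counts

        def cadzow_denoise(x, rank, iters=10):
            """Classic Cadzow denoising: alternate rank truncation and Hankel averaging.
            A sum of `rank` damped exponentials yields a Hankel matrix of that rank."""
            y = x.copy()
            L = len(x) // 2
            for _ in range(iters):
                U, s, Vh = np.linalg.svd(hankel(y, L), full_matrices=False)
                y = dehankel((U[:, :rank] * s[:rank]) @ Vh[:rank])
            return y

        # Two damped exponentials plus noise (synthetic FID-like signal).
        rng = np.random.default_rng(0)
        t = np.arange(256)
        clean = np.exp((-0.01 + 0.3j) * t) + 0.5 * np.exp((-0.02 + 1.1j) * t)
        noisy = clean + 0.2 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
        print(np.linalg.norm(noisy - clean),
              np.linalg.norm(cadzow_denoise(noisy, rank=2) - clean))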