
    Generalized residual vector quantization for large scale data

    Vector quantization is an essential tool for tasks involving large-scale data, for example large-scale similarity search, which is crucial for content-based information retrieval and analysis. In this paper, we propose a novel vector quantization framework that iteratively minimizes quantization error. First, we provide a detailed review of a relevant vector quantization method named residual vector quantization (RVQ). Next, we propose generalized residual vector quantization (GRVQ) to further improve over RVQ. Many vector quantization methods can be viewed as special cases of our proposed framework. We evaluate GRVQ on several large-scale benchmark datasets for large-scale search, classification and object retrieval, and compare it with existing methods in detail. Extensive experiments demonstrate that our GRVQ framework substantially outperforms existing methods in terms of quantization accuracy and computational efficiency.
    Comment: published at the International Conference on Multimedia and Expo 201
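    Since GRVQ builds directly on RVQ, a minimal NumPy sketch of plain RVQ encoding and decoding may help fix ideas: each stage quantizes the residual left by the previous stages, and the reconstruction is the sum of the selected codewords. The random codebooks and names below are purely illustrative (in practice each stage's codebook is learned, e.g. by k-means on that stage's residuals, and GRVQ additionally revisits and re-optimizes stages); this is not the paper's code.

    import numpy as np

    rng = np.random.default_rng(0)
    D, M, K = 8, 4, 16                       # dimension, stages, codewords per stage
    codebooks = rng.normal(size=(M, K, D))   # illustrative; learned in practice

    def rvq_encode(x, codebooks):
        """Quantize x stage by stage: each stage encodes the remaining residual."""
        residual = x.copy()
        codes = []
        for C in codebooks:
            idx = int(np.argmin(((residual - C) ** 2).sum(axis=1)))  # nearest codeword
            codes.append(idx)
            residual -= C[idx]                                       # pass residual on
        return codes

    def rvq_decode(codes, codebooks):
        """Reconstruct x as the sum of the selected codewords."""
        return sum(C[i] for C, i in zip(codebooks, codes))

    x = rng.normal(size=D)
    x_hat = rvq_decode(rvq_encode(x, codebooks), codebooks)
    print(np.linalg.norm(x - x_hat))  # quantization error; shrinks with learned stages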

    TensorIR: An Abstraction for Automatic Tensorized Program Optimization

    Deploying deep learning models on various devices has become an important topic. The wave of hardware specialization brings a diverse set of acceleration primitives for multi-dimensional tensor computations. These new acceleration primitives, along with the emerging machine learning models, bring tremendous engineering challenges. In this paper, we present TensorIR, a compiler abstraction for optimizing programs with these tensor computation primitives. TensorIR generalizes the loop-nest representation used in existing machine learning compilers to make tensor computation a first-class citizen. Finally, we build an end-to-end framework on top of our abstraction to automatically optimize deep learning models for given tensor computation primitives. Experimental results show that TensorIR compilation automatically uses the tensor computation primitives for given hardware backends and delivers performance competitive with state-of-the-art hand-optimized systems across platforms.
    Comment: Accepted to ASPLOS 202
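    For readers unfamiliar with the abstraction, the sketch below shows its flavor: a matmul whose innermost computation is wrapped in a named block, the unit the compiler can rewrite and map onto hardware tensor primitives. It follows the TVMScript syntax of Apache TVM's TensorIR; the exact syntax varies across TVM versions, so treat this as an illustration rather than the paper's verbatim example.

    from tvm.script import tir as T

    @T.prim_func
    def matmul(A: T.Buffer((128, 128), "float32"),
               B: T.Buffer((128, 128), "float32"),
               C: T.Buffer((128, 128), "float32")):
        # T.grid describes the loop nest; T.block marks a tensorizable unit.
        for i, j, k in T.grid(128, 128, 128):
            with T.block("C"):
                vi, vj, vk = T.axis.remap("SSR", [i, j, k])  # S = spatial, R = reduction
                with T.init():
                    C[vi, vj] = T.float32(0)
                C[vi, vj] = C[vi, vj] + A[vi, vk] * B[vk, vj]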

    IKKβ Regulates the Repair of DNA Double-Strand Breaks Induced by Ionizing Radiation in MCF-7 Breast Cancer Cells

    Activation of the IKK-NFκB pathway increases the resistance of cancer cells to ionizing radiation (IR), an effect largely attributed to the induction of anti-apoptotic proteins by NFκB. Since efficient repair of DNA double-strand breaks (DSBs) is required for the clonogenic survival of irradiated cells, we investigated whether activation of the IKK-NFκB pathway also regulates DSB repair to promote cell survival after IR. We found that inhibition of the IKK-NFκB pathway with a specific IKKβ inhibitor significantly reduced the repair of IR-induced DSBs in MCF-7 cells. The repair of DSBs was also significantly inhibited by silencing IKKβ expression with IKKβ shRNA, whereas down-regulation of IKKα expression with IKKα shRNA had no significant effect on the repair of IR-induced DSBs. Similar findings were observed in IKKα and/or IKKβ knockout mouse embryonic fibroblasts (MEFs). More importantly, inhibition of IKKβ with an inhibitor, or down-regulation of IKKβ with IKKβ shRNA, sensitized MCF-7 cells to IR-induced clonogenic cell death. DSB repair function and resistance to IR were completely restored by IKKβ reconstitution in IKKβ-knockdown MCF-7 cells. These findings demonstrate that IKKβ can regulate the repair of DSBs, a previously undescribed and important IKKβ kinase function, and that inhibition of DSB repair may contribute to cancer cell radiosensitization induced by IKKβ inhibition. As such, specific inhibition of IKKβ may represent a more effective approach to sensitizing cancer cells to radiotherapy.

    Tensor Program Optimization with Probabilistic Programs

    Automatic optimization of tensor programs becomes increasingly important as deep learning is deployed in diverse environments, and efficient optimization relies on a rich search space and an effective search. Most existing efforts adopt a search space that domain experts cannot efficiently grow. This paper introduces MetaSchedule, a domain-specific probabilistic programming language abstraction for constructing a rich search space of tensor programs. Our abstraction allows domain experts to analyze the program and easily propose stochastic choices in a modular way to compose program transformations. We also build an end-to-end learning-driven framework to find an optimized program for a given search space. Experimental results show that MetaSchedule can cover the search space used in state-of-the-art tensor program optimization frameworks in a modular way. Additionally, it empowers domain experts to conveniently grow the search space and modularly enhance the system, bringing a 48% speedup on end-to-end deep learning workloads.
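    The core idea can be illustrated with a hedged sketch: a schedule function that makes named stochastic choices (here, tile sizes), which the search framework then samples over and learns from. The code targets TVM's tvm.tir.Schedule API; the block name and the particular transformations are assumptions chosen for illustration, not the paper's code.

    from tvm import tir

    def stochastic_schedule(sch: tir.Schedule) -> tir.Schedule:
        block = sch.get_block("C")                   # e.g. a matmul block (assumed name)
        i, j, k = sch.get_loops(block)
        # Each sample_* call is a random variable; the search space is the
        # set of all joint outcomes of these choices.
        i_factors = sch.sample_perfect_tile(i, n=2)  # stochastic tile sizes
        j_factors = sch.sample_perfect_tile(j, n=2)
        i0, i1 = sch.split(i, factors=i_factors)
        j0, j1 = sch.split(j, factors=j_factors)
        sch.reorder(i0, j0, i1, j1, k)               # tiled loop order
        return sch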