
    Intravenous Formulation of HET0016 Decreased Human Glioblastoma Growth and Implicated Survival Benefit in Rat Xenograft Models

    Glioblastoma (GBM) is a hypervascular primary brain tumor with poor prognosis. HET0016 is a selective CYP450 inhibitor that has been shown to inhibit angiogenesis and tumor growth. Therefore, to explore novel treatments, we generated an improved intravenous (IV) formulation of HET0016 with HPβCD and tested it in animal models of human and syngeneic GBM. Administration of a single IV dose resulted in 7-fold higher levels of HET0016 in plasma and 3.6-fold higher levels in tumor at 60 min than the intraperitoneal (IP) route. IV treatment with HPβCD-HET0016 decreased tumor growth and altered vascular kinetics in both early and late treatment groups (p < 0.05). Similar growth inhibition was observed in syngeneic GL261 GBM (p < 0.05). Survival studies using patient-derived xenografts of GBM811 showed prolonged survival to 26 weeks in animals treated with focal radiation in combination with HET0016 and TMZ (p < 0.05). We observed reduced expression of markers of cell proliferation (Ki-67) and decreased neovascularization (laminin and αSMA), in addition to reduced inflammation and angiogenesis markers, in the treatment group (p < 0.05). Our results indicate that HPβCD-HET0016 is effective in inhibiting tumor growth through decreased proliferation and neovascularization. Furthermore, HPβCD-HET0016 significantly prolonged survival in the PDX GBM811 model.

    Low-Area and Low-Power VLSI Architectures for Long Short-Term Memory Networks

    Long short-term memory (LSTM) networks are extensively used in various sequential learning tasks, including speech recognition. Their significance in real-world applications has prompted demand for cost-effective and power-efficient designs. This paper introduces LSTM architectures based on distributed arithmetic (DA), utilizing circulant and block-circulant matrix-vector multiplications (MVMs) for network compression. A quantized-weights-oriented approach for training circulant and block-circulant matrices is considered. By formulating fixed-point circulant/block-circulant MVMs, we explore the impact of kernel size on accuracy. Our DA-based approach employs shared full and partial methods of add-store/store-add followed by a select unit to realize an MVM, and is then coupled with a multi-partial strategy to reduce complexity for larger kernel sizes. Further complexity reduction is achieved by optimizing the decoders of multiple select units. Pipelining in the add-store stage enhances speed at the expense of a few pipelined registers. Field-programmable gate array results showcase the superiority of the proposed architectures based on the partial store-add method, delivering reductions of 98.71% in DSP slices, 33.59% in slice look-up tables, 13.43% in flip-flops, and 29.76% in power compared to the state of the art.
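
    The circulant structure behind this compression can be illustrated with a minimal sketch. This is not the paper's method (which uses fixed-point distributed arithmetic realized as add-store/store-add hardware); it is a floating-point NumPy illustration with hypothetical function names, showing why a circulant weight block needs only n stored values instead of n^2 and why its MVM reduces to a circular convolution.

        import numpy as np

        # Illustrative sketch only (floating-point NumPy, not the paper's
        # fixed-point DA hardware). A circulant block is fully defined by its
        # first column c, so an n x n weight block stores n values, not n^2.
        def circulant(c):
            n = len(c)
            # Column k of the circulant matrix is c rotated down by k positions.
            return np.stack([np.roll(c, k) for k in range(n)], axis=1)

        def circulant_mvm(c, x):
            # C @ x equals the circular convolution of c and x, so it can be
            # evaluated in the frequency domain instead of as a dense product.
            return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

        rng = np.random.default_rng(0)
        n = 8
        c = rng.standard_normal(n)   # stored kernel (first column of the block)
        x = rng.standard_normal(n)   # input activation slice

        assert np.allclose(circulant(c) @ x, circulant_mvm(c, x))

    In block-circulant compression, each LSTM weight matrix is partitioned into such blocks; the architectures described in the abstract realize the same block-wise MVMs with quantized weights and shift-and-add (DA) logic rather than FFTs.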