
    Induction of synthetic lethality in mutant KRAS cells for non-small cell lung cancers chemoprevention and therapy

    Lung cancer is the leading cause of cancer death in both men and women in the United States and worldwide. Despite improvements in treatment strategies, the 5-year survival rate of lung cancer patients remains low; thus, effective chemoprevention and treatment approaches are sorely needed. Mutations and activation of KRAS occur frequently in tobacco users and in the early stages of non-small cell lung cancer (NSCLC) development, so they are thought to be the primary driver of lung carcinogenesis. My work showed that KRAS mutation and activation modulated the expression of TNF-related apoptosis-inducing ligand (TRAIL) receptors by up-regulating death receptors and down-regulating decoy receptors. In addition, we showed that KRAS suppresses cellular FADD-like IL-1β-converting enzyme (FLICE)-like inhibitory protein (c-FLIP) expression through ERK/MAPK-mediated activation of c-MYC, which means that mutant KRAS cells can be specifically targeted via TRAIL-induced apoptosis. The expression of inhibitor of apoptosis proteins (IAPs) in mutant KRAS cells is usually high, which can be overcome by a second mitochondria-derived activator of caspases (Smac) mimetic. Thus, the combination of TRAIL and a Smac mimetic induced synthetic lethality specifically in mutant KRAS cells but not in normal lung cells or wild-type KRAS lung cancer cells. Therefore, a synthetic lethal interaction among TRAIL, a Smac mimetic, and KRAS mutations could be used as an approach for the chemoprevention and treatment of NSCLC with KRAS mutations. Further animal experiments showed that short-term, intermittent treatment with TRAIL and a Smac mimetic induced apoptosis in mutant KRAS cells and reduced tumor burden in a KRAS-induced pre-malignancy model and in mutant KRAS NSCLC xenograft models. These results show the great potential of this selective therapeutic approach for the chemoprevention and treatment of NSCLC with KRAS mutations.

    Efficient Traffic State Forecasting using Spatio-Temporal Network Dependencies: A Sparse Graph Neural Network Approach

    Traffic state prediction in a transportation network is paramount for effective traffic operations and management, as well as for informed user- and system-level decision-making. However, long-term traffic prediction (beyond 30 minutes into the future) remains challenging in current research. In this work, we integrate the spatio-temporal dependencies of the transportation network obtained from network modeling with a graph convolutional network (GCN) and a graph attention network (GAT). To further tackle the dramatic computation and memory costs caused by the large model size (i.e., number of weights) of multiple cascaded layers, we propose sparse training to mitigate the training cost while preserving prediction accuracy: the model is trained using a fixed number of nonzero weights in each layer at each iteration. We consider the problem of long-term traffic speed forecasting on real large-scale transportation network data from the California Department of Transportation (Caltrans) Performance Measurement System (PeMS). Experimental results show that the proposed GCN-STGT and GAT-STGT models achieve low prediction errors on short-, mid- and long-term prediction horizons of 15, 30 and 45 minutes, respectively. Using our sparse training, we can train from scratch at high sparsity (e.g., up to 90%), equivalent to a 10x reduction in floating-point operations (FLOPs) and computational cost, using the same number of epochs as dense training, and arrive at a model with very small accuracy loss compared with the original dense training.
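    As a rough, hypothetical sketch of the fixed-nonzero-count sparse training described above (the paper's actual drop-and-grow criteria are not given in this abstract), each layer keeps a binary mask with a constant number of active weights and periodically drops the smallest-magnitude active weights while regrowing the same number elsewhere:

        import torch

        def drop_and_grow(weight, mask, grow_fraction=0.1):
            """One sparse-training update that keeps the number of nonzero
            weights in the layer constant (illustrative only)."""
            n_update = int(grow_fraction * int(mask.sum()))
            # Drop: deactivate the n_update smallest-magnitude active weights.
            magnitude = weight.abs().masked_fill(~mask.bool(), float("inf"))
            drop_idx = torch.topk(magnitude.view(-1), n_update, largest=False).indices
            mask.view(-1)[drop_idx] = 0.0
            # Grow: reactivate n_update inactive positions (randomly here;
            # gradient-based criteria are a common alternative).
            inactive = (mask.view(-1) == 0).nonzero().view(-1)
            grow_idx = inactive[torch.randperm(len(inactive))[:n_update]]
            mask.view(-1)[grow_idx] = 1.0
            weight.data.mul_(mask)  # keep dropped weights at zero
            return mask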

    YouNICon: YouTube's CommuNIty of Conspiracy Videos

    Conspiracy theories are widely propagated on social media, and among social media services, YouTube is one of the most influential sources of news and entertainment. This paper develops a dataset, YOUNICON, to enable researchers to perform conspiracy theory detection as well as classification of videos containing conspiracy theories into different topics. YOUNICON is a large collection of videos from suspicious channels that were identified as containing conspiracy theories in a previous study (Ledwich and Zaitsev 2020). Overall, YOUNICON will enable researchers to study trends in conspiracy theories and understand how individuals interact with conspiracy-theory-producing communities or channels. Our data is available at: https://doi.org/10.5281/zenodo.7466262
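    Since the dataset is hosted on Zenodo, its files can be listed programmatically through Zenodo's public REST API (a minimal sketch; the record ID 7466262 is taken from the DOI above, and the exact file layout of the record is an assumption):

        import requests

        # Query the Zenodo record behind DOI 10.5281/zenodo.7466262.
        record = requests.get("https://zenodo.org/api/records/7466262", timeout=30).json()
        for f in record.get("files", []):
            print(f["key"], f["links"]["self"])  # file name and download URL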

    Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off

    Over-parameterization of deep neural networks (DNNs) has shown high prediction accuracy for many applications. Although effective, the large number of parameters hinders their use on resource-limited devices and has an outsize environmental impact. Sparse training (using a fixed number of nonzero weights in each iteration) could significantly mitigate the training cost by reducing the model size. However, existing sparse training methods mainly use either random-based or greedy-based drop-and-grow strategies, resulting in convergence to local minima and low accuracy. In this work, we formulate dynamic sparse training as a sparse connectivity search problem and design an exploitation-and-exploration acquisition function to escape from local optima and saddle points. We further provide theoretical guarantees for the proposed method and clarify its convergence property. Experimental results show that sparse models (up to 98% sparsity) obtained by our proposed method outperform the SOTA sparse training methods on a wide variety of deep learning tasks. On VGG-19 / CIFAR-100, ResNet-50 / CIFAR-10, and ResNet-50 / CIFAR-100, our method achieves even higher accuracy than dense models. On ResNet-50 / ImageNet, the proposed method yields up to 8.2% accuracy improvement compared to SOTA sparse training methods.
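    The abstract does not spell out the acquisition function, so the following is only a plausible minimal sketch of an exploitation-and-exploration score for the grow step (the names and the annealing schedule are illustrative assumptions): combine gradient magnitude with a decaying random bonus.

        import torch

        def grow_scores(grad, mask, step, explore_coef=0.1):
            """Score inactive connections for regrowth: exploit positions with
            large gradient magnitude, plus an annealed random exploration term
            intended to escape local optima and saddle points (illustrative)."""
            inactive = (mask == 0).float()
            exploit = grad.abs() * inactive
            explore = torch.rand_like(grad) * inactive
            beta = explore_coef / (1.0 + step) ** 0.5  # decay exploration over time
            return exploit + beta * explore
        # Regrow the top-k scoring inactive positions at each update.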

    ARHI (DIRAS 3), an Imprinted Tumor Suppressor Gene, Binds to Importins, and Blocks Nuclear Translocation of Stat3

    ARHI (DIRAS3) is an imprinted tumor suppressor gene whose expression is lost in the majority of breast and ovarian cancers. Unlike its homologs Ras and Rap, ARHI functions as a tumor suppressor. Our previous study showed that ARHI can interact with the transcription activator Stat3 and inhibit its nuclear translocation in human breast and ovarian cancer cells. To identify proteins that interact with ARHI in nuclear translocation, we performed proteomic analysis and identified several importins that can associate with ARHI. To further explore this novel finding, we purified 10 GST-importin fusion proteins (importins 7, 8, 13, β1, α1, α3, α5, α6, and α7, as well as mutant α1). Using a GST pull-down assay, we found that ARHI can bind strongly to most importins; however, its binding is significantly reduced with an importin α1 mutant that contains an altered nuclear localization signal (NLS) domain. In addition, an ARHI N-terminal deletion mutant (NTD) exhibits much less binding to all importins than does wild-type ARHI. ARHI and NTD proteins were purified and tested for their ability to inhibit nuclear import of proteins in HeLa cells. ARHI protein inhibits the interaction of Ran-importin complexes with GFP fusion proteins that contain an NLS domain and a beta-like import receptor binding domain, blocking their nuclear localization. Addition of ARHI also blocked nuclear localization of phosphorylated Stat3β. By GST pull-down assays, we found that ARHI could compete for Ran-importin binding. Thus, ARHI-induced disruption of importin binding to cargo proteins, including Stat3, could serve as an important regulatory mechanism that contributes to the tumor suppressor function of ARHI.

    Suppression of Cross-Polarization of the Microstrip Integrated Balun-Fed Printed Dipole Antenna

    The high cross-polarization of the microstrip integrated balun-fed printed dipole antenna cannot meet the demands of many engineering applications. This kind of antenna has high cross-polarization levels (about −20 dB), and we find that the high cross-polarization radiation is produced mainly by the microstrip integrated balun rather than by the dipole itself. Previously, the only available method of lowering the cross-polarization level of such antennas was to reduce the substrate thickness. In this paper, to improve the cross-polarization performance, an equivalent model is first presented to analyze the cross-polarization radiation. Secondly, a novel low-cross-polarization structure is proposed, in which the microstrip integrated balun is enclosed by a center-slotted cavity. The slot transforms the E-field of the microstrip integrated balun to be parallel to the dipole arms, so radiation of the cross-polarized component is suppressed. Measured results show that this structure can achieve a bandwidth wider than 40% while reducing the cross-polarization level to less than −35 dB within the frequency band.

    Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration

    Biologically inspired Spiking Neural Networks (SNNs) have attracted significant attention for their ability to provide extremely energy-efficient machine intelligence through event-driven operation and sparse activities. As artificial intelligence (AI) becomes ever more democratized, there is an increasing need to execute SNN models on edge devices. Existing works adopt weight pruning to reduce SNN model size and accelerate inference. However, these methods mainly focus on how to obtain a sparse model for efficient inference rather than on training efficiency. To overcome these drawbacks, in this paper we propose a Neurogenesis Dynamics-inspired Spiking Neural Network training acceleration framework, NDSNN. Our framework is computationally efficient and trains a model from scratch with dynamic sparsity without sacrificing model fidelity. Specifically, we design a new drop-and-grow strategy with a decreasing number of nonzero weights, to maintain extremely high sparsity and high accuracy. We evaluate NDSNN using VGG-16 and ResNet-19 on CIFAR-10, CIFAR-100 and Tiny-ImageNet. Experimental results show that NDSNN achieves up to 20.52% improvement in accuracy on Tiny-ImageNet using ResNet-19 (with a sparsity of 99%) compared to other SOTA methods (e.g., Lottery Ticket Hypothesis (LTH), SET-SNN, RigL-SNN). In addition, the training cost of NDSNN is only 40.89% of the LTH training cost on ResNet-19 and 31.35% of the LTH training cost on VGG-16 on CIFAR-10.
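    A minimal sketch of the "decreasing number of nonzero weights" idea (the schedule shape below is an illustrative assumption, not the paper's): each drop-and-grow update drops slightly more connections than it regrows, following a decaying target count of active weights.

        import math

        def target_nonzero(step, total_steps, n_start, n_end):
            """Cosine-decay the number of active weights from n_start to n_end,
            so sparsity gradually increases over the course of training."""
            frac = min(step / total_steps, 1.0)
            return int(n_end + 0.5 * (n_start - n_end) * (1 + math.cos(math.pi * frac)))

        # Example: decay from 50K to 10K active weights over 10K steps.
        print(target_nonzero(0, 10_000, 50_000, 10_000))       # 50000
        print(target_nonzero(10_000, 10_000, 50_000, 10_000))  # 10000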

    PolyMPCNet: Towards ReLU-free Neural Architecture Search in Two-party Computation Based Private Inference

    The rapid growth and deployment of deep learning (DL) has raised emerging privacy and security concerns. To mitigate these issues, secure multi-party computation (MPC) has been proposed to enable privacy-preserving DL computation. In practice, however, MPC protocols often come with very high computation and communication overhead, which can prohibit their adoption in large-scale systems. Two orthogonal research trends have attracted enormous interest in addressing the energy efficiency of secure deep learning: overhead reduction of the MPC comparison protocol, and hardware acceleration. However, existing approaches either achieve a low reduction ratio and suffer from high latency due to limited computation and communication savings, or are power-hungry because they mainly target general computing platforms such as CPUs and GPUs. In this work, as a first attempt, we develop a systematic framework, PolyMPCNet, for joint overhead reduction of the MPC comparison protocol and hardware acceleration, by integrating the hardware latency of the cryptographic building blocks into the DNN loss function to achieve high energy efficiency, accuracy, and security guarantees. Instead of heuristically checking the model sensitivity after a DNN is well-trained (by deleting or dropping some non-polynomial operators), our key design principle is to enforce exactly what is assumed in the DNN design -- training a DNN that is both hardware efficient and secure, while escaping local minima and saddle points and maintaining high accuracy. More specifically, we propose a straight-through polynomial activation initialization method for a cryptographic-hardware-friendly trainable polynomial activation function to replace the expensive 2P-ReLU operator. We also develop a cryptographic hardware scheduler and the corresponding performance model for the field-programmable gate array (FPGA) platform.
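    As a hedged sketch of the trainable polynomial activation idea (the degree and initial coefficients below are illustrative, taken from a least-squares quadratic fit of ReLU on [-1, 1], not from the paper), ReLU can be replaced by a learnable degree-2 polynomial, which MPC protocols can evaluate with cheap multiplications and additions instead of expensive comparisons:

        import torch
        import torch.nn as nn

        class PolyAct(nn.Module):
            """Trainable degree-2 polynomial activation as a stand-in for
            2P-ReLU; a, b, c are initialized near a quadratic least-squares
            fit of ReLU on [-1, 1] and then trained end to end."""
            def __init__(self):
                super().__init__()
                self.a = nn.Parameter(torch.tensor(0.47))
                self.b = nn.Parameter(torch.tensor(0.50))
                self.c = nn.Parameter(torch.tensor(0.09))

            def forward(self, x):
                return self.a * x * x + self.b * x + self.c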

    AutoReP: Automatic ReLU Replacement for Fast Private Network Inference

    The growth of the Machine-Learning-as-a-Service (MLaaS) market has highlighted clients' data privacy and security issues. Private inference (PI) techniques using cryptographic primitives offer a solution but often have high computation and communication costs, particularly with non-linear operators like ReLU. Many attempts to reduce ReLU operations exist, but they either require heuristic threshold selection or cause substantial accuracy loss. This work introduces AutoReP, a gradient-based approach to reducing non-linear operators and alleviating these issues. It automates the selection between ReLU and polynomial functions to speed up PI applications, and introduces a distribution-aware polynomial approximation (DaPa) to maintain model expressivity while accurately approximating ReLUs. Our experimental results demonstrate significant accuracy improvements of 6.12% (94.31%, 12.9K ReLU budget, CIFAR-10), 8.39% (74.92%, 12.9K ReLU budget, CIFAR-100), and 9.45% (63.69%, 55K ReLU budget, Tiny-ImageNet) over current state-of-the-art methods such as SNL. Moreover, AutoReP applied to EfficientNet-B2 on the ImageNet dataset achieves 75.55% accuracy with a 176.1x reduction in ReLU budget. (ICCV 2023 accepted publication.)
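    The abstract does not define DaPa precisely; a minimal sketch of what "distribution-aware" could mean here (the function name and the use of plain least squares are assumptions) is to fit the polynomial coefficients against ReLU under the layer's empirical pre-activation distribution rather than over a fixed uniform range:

        import numpy as np

        def dapa_fit(pre_activations, degree=2):
            """Fit polynomial coefficients minimizing mean squared error
            against ReLU, weighted implicitly by the empirical distribution
            of the sampled pre-activations (illustrative sketch)."""
            X = np.vander(pre_activations, degree + 1)    # columns: x^2, x, 1
            y = np.maximum(pre_activations, 0.0)          # ReLU targets
            coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coeffs  # highest degree first, usable with np.polyval

        # Example: coefficients adapted to a layer whose inputs are ~N(0.2, 1).
        samples = np.random.normal(0.2, 1.0, size=10_000)
        print(dapa_fit(samples))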

    Implementation of the CMOS MEMS Condenser Microphone with Corrugated Metal Diaphragm and Silicon Back-Plate

    This study reports a CMOS-MEMS condenser microphone implemented using the standard thin-film stacking of the 0.35 μm UMC CMOS 3.3/5.0 V logic process, followed by post-CMOS micromachining steps without introducing any special materials. The corrugated diaphragm of the microphone is designed and implemented using the metal layer to reduce the influence of thin-film residual stresses. Moreover, a silicon substrate is employed to increase the stiffness of the back-plate. Measurements show that the sensitivity of the microphone is −42 ± 3 dBV/Pa at 1 kHz (with a 94 dB reference sound level) under a 6 V pumping voltage, the frequency response is 100 Hz–10 kHz, and the S/N ratio is >55 dB. It also has low current consumption of less than 200 μA and low distortion of less than 1% (referred to 100 dB).
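    For intuition, the reported sensitivity converts to linear units as

        S = 10^{-42/20} \,\mathrm{V/Pa} \approx 7.9\,\mathrm{mV/Pa},

    i.e., a 1 Pa (94 dB SPL) tone produces roughly 7.9 mV at the output, with the ±3 dB tolerance spanning about 5.6 to 11.2 mV/Pa.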