RNAI MEDIATED GENE SILENCING OF EIF3A: A POSSIBLE SOLUTION TO CONTROL BREAST CANCER
Objective: The eukaryotic translation initiation factor 3A (eIF3A) is reported to be overexpressed in most breast cancer cells. In the present study, our aim was to suppress the overexpression of eIF3A in the human breast cancer MCF-7 cell line using the gene silencing technique RNA interference (RNAi). Methods: An artificial microRNA (amiRNA) targeting the eIF3A gene was constructed by incorporating short interfering RNA (siRNA) sequences against eIF3A into the endogenous microRNA-30 (miR-30) backbone and cloned into the pcDNA3.1 vector. The amiRNA-containing plasmid was then transfected into the MCF-7 cell line, and eIF3A expression was examined by RT-PCR. The cytotoxicity of the amiRNA plasmid targeting eIF3A on MCF-7 cells was evaluated by MTT assay. Results: The amiRNA construct significantly inhibited eIF3A gene expression and reduced the viability of MCF-7 cells. Conclusion: The use of a modified endogenous amiRNA in a vector-based expression system with significant gene silencing efficiency suggests that RNAi-based gene silencing can be considered an effective means of controlling cancer.
Self-supervised learning based knowledge distillation framework for automatic speech recognition for hearing impaired
The use of speech processing applications, particularly speech recognition, has received considerable attention in recent decades. In recent years, research has focused on applying deep learning to speech-related tasks; this branch of machine learning has outperformed others across a range of applications, including speech, and has therefore become a particularly appealing research subject. Noise, speaker variability, language variability, vocabulary size, and domain mismatch remain among the most significant research challenges in speech recognition. We investigated self-supervised algorithms for unlabelled data. These algorithms have progressed significantly in recent years, with their efficacy approaching that of supervised pre-training alternatives across a variety of data modalities such as image and video. The purpose of this research is to develop powerful models for audio speech recognition that do not require human annotation. We accomplish this by distilling knowledge from an automatic speech recognition (ASR) model trained on a large audio-only corpus. Our distillation technique combines a Connectionist Temporal Classification (CTC) loss with a KL-divergence loss. We demonstrate that distillation significantly speeds up training, and we evaluate our model with the Word Error Rate (WER) metric.
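Two quantitative ingredients named in the abstract — the KL-divergence term used in distillation and the Word Error Rate metric — can be illustrated with a minimal sketch. This is not the authors' implementation (their full loss also includes a CTC term, which is omitted here for brevity); the function names and example strings are purely illustrative assumptions:

```python
import math

def kl_divergence(teacher_probs, student_probs):
    """KL(teacher || student) for one probability distribution.

    In knowledge distillation, this term pushes the student's output
    distribution toward the teacher's soft targets.
    """
    return sum(t * math.log(t / s)
               for t, s in zip(teacher_probs, student_probs) if t > 0)

def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One word deleted out of six reference words -> WER = 1/6
print(round(wer("the cat sat on the mat", "the cat sat on mat"), 3))  # 0.167
```

In practice the KL term is computed per output frame over the vocabulary (often with temperature-scaled logits), and WER is accumulated over an entire evaluation set rather than a single utterance pair.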