
    Involvement of KSRP in the post-transcriptional regulation of human iNOS expression-complex interplay of KSRP with TTP and HuR

    We purified the KH-type splicing regulatory protein (KSRP) as a protein interacting with the 3′-untranslated region (3′-UTR) of the human inducible nitric oxide synthase (iNOS) mRNA. Immunodepletion of KSRP enhanced iNOS 3′-UTR RNA stability in in vitro degradation assays. In DLD-1 cells overexpressing KSRP, cytokine-induced iNOS expression was markedly reduced. Accordingly, downregulation of KSRP expression increased iNOS expression by stabilizing iNOS mRNA. Co-immunoprecipitations showed interaction of KSRP with the exosome and tristetraprolin (TTP). To analyze the role of KSRP binding to the 3′-UTR, we studied iNOS expression in DLD-1 cells overexpressing a non-binding mutant of KSRP. In these cells, iNOS expression was increased. Mapping of the binding site revealed that KSRP interacts with the most 3′-located AU-rich element (ARE) of the human iNOS mRNA. This sequence is also the target for HuR, an iNOS mRNA-stabilizing protein. We were able to demonstrate that KSRP and HuR compete for this binding site, and that intracellular binding to the iNOS mRNA was reduced for KSRP and enhanced for HuR after cytokine treatment. Finally, a complex interplay of KSRP with TTP and HuR seems to be essential for iNOS mRNA stabilization after cytokine stimulation.

    Heme Oxygenase-1 Induction and Organic Nitrate Therapy: Beneficial Effects on Endothelial Dysfunction, Nitrate Tolerance, and Vascular Oxidative Stress

    Organic nitrates are a group of very effective anti-ischemic drugs. They are used for the treatment of patients with stable angina, acute myocardial infarction, and chronic congestive heart failure. A major therapeutic limitation inherent to organic nitrates is the development of tolerance, which occurs during chronic treatment with these agents; this phenomenon is largely based on the induction of oxidative stress with subsequent endothelial dysfunction. We therefore speculated that induction of heme oxygenase-1 (HO-1) could be an efficient strategy to overcome nitrate tolerance and the associated side effects. Indeed, we found that hemin cotreatment prevented the development of nitrate tolerance and vascular oxidative stress in response to chronic nitroglycerin therapy. Conversely, pentaerithrityl tetranitrate (PETN), a nitrate previously reported to be devoid of adverse side effects, displayed tolerance and oxidative stress when the HO-1 pathway was blocked pharmacologically or genetically by using HO-1+/– mice. Recently, we identified activation of Nrf2 and HuR as a principal mechanism of HO-1 induction by PETN. In the present paper, we present and discuss our recent and previous findings on the role of HO-1 in the prevention of nitroglycerin-induced nitrate tolerance and in the beneficial effects of PETN therapy.

    Early Callsign Highlighting using Automatic Speech Recognition to Reduce Air Traffic Controller Workload

    The primary task of an air traffic controller (ATCo) is to issue instructions to pilots. However, the first contact is often initiated by the pilot. It is therefore useful to have a controller assistance system that recognizes and highlights the spoken callsign as early as possible, directly from the speech data. We propose to use an automatic speech recognition (ASR) system to obtain the speech-to-text transcription, from which we extract the spoken callsign. Because high callsign recognition performance is required, we additionally use surveillance data, which significantly improves the performance. We obtain callsign recognition error rates of 6.2% and 8.3% for ATCo and pilot utterances, respectively, which improve to 2.8% and 4.5% when information from surveillance data is used.
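
    The approach can be illustrated with a small sketch. The snippet below is not the authors' implementation; it only shows, under assumed expansion tables and a generic string-similarity score, how callsigns taken from surveillance data could be re-ranked against an ASR transcript to extract the spoken callsign.

        # Hypothetical sketch: pick the surveillance callsign whose spoken form
        # best matches the ASR transcript. Expansion tables are illustrative subsets.
        from difflib import SequenceMatcher

        AIRLINE_WORDS = {"DLH": "lufthansa", "BAW": "speedbird", "RYR": "ryanair"}
        DIGIT_WORDS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
                       "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}
        NATO_WORDS = {"A": "alfa", "C": "charlie", "K": "kilo", "T": "tango"}  # subset only

        def spoken_form(callsign: str) -> str:
            """Expand a surveillance callsign such as 'DLH4TK' into its likely spoken words."""
            airline, suffix = callsign[:3], callsign[3:]
            words = [AIRLINE_WORDS.get(airline, airline.lower())]
            for ch in suffix:
                words.append(DIGIT_WORDS.get(ch) or NATO_WORDS.get(ch, ch.lower()))
            return " ".join(words)

        def best_callsign(transcript, surveillance_callsigns):
            """Return (callsign, score) for the surveillance callsign most similar to the transcript."""
            return max(((cs, SequenceMatcher(None, spoken_form(cs), transcript.lower()).ratio())
                        for cs in surveillance_callsigns), key=lambda pair: pair[1])

        print(best_callsign("lufthansa four tango kilo descend flight level one two zero",
                            ["DLH4TK", "BAW27C", "RYR89A"]))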

    Automatic Speech Analysis Framework for ATC Communication in HAAWAII

    Over the past years, several SESAR-funded exploratory projects focused on bringing speech and language technologies to the Air Traffic Management (ATM) domain and on demonstrating their added value through successful applications. The recently concluded HAAWAII project developed a generic architecture and framework, which was validated through several tasks such as callsign highlighting, pre-filling radar labels, and readback error detection. The primary goal was to support pilot and air traffic controller communication by deploying Automatic Speech Recognition (ASR) engines. Contextual information (if available) extracted from surveillance data, flight plan data, or previous communication can be exploited via entity boosting to further improve the recognition performance. HAAWAII proposed various design attributes to integrate the ASR engine into the ATM framework, often depending on concrete technical specifics of the target air navigation service providers (ANSPs). This paper gives a brief overview and provides an objective assessment of the speech processing components developed and integrated into the HAAWAII framework. Specifically, the following tasks are evaluated with respect to the application domain: (i) speech activity detection, (ii) speaker segmentation and speaker role classification, and (iii) ASR. To the best of our knowledge, the HAAWAII framework offers the best-performing speech technologies for ATM, reaching high recognition accuracy (i.e., error correction done by exploiting additional contextual data), robustness (i.e., models developed using large training corpora), and support for rapid domain transfer (i.e., to a new ATM sector with minimum investment). Two scenarios provided by ANSPs were used for testing, achieving callsign detection accuracies of about 96% and 95% for NATS and ISAVIA, respectively.
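
    As a rough illustration of how such components fit together, the following sketch chains speech activity detection, speaker role classification, and context-boosted ASR; every stage implementation is a stub for illustration and is not part of the HAAWAII framework itself.

        # Illustrative pipeline skeleton only; all stages below are stubs.
        from dataclasses import dataclass

        @dataclass
        class Segment:
            audio: bytes            # one voice-active region of the recording
            speaker_role: str = ""  # "ATCO" or "PILOT"
            transcript: str = ""

        def detect_speech(audio: bytes) -> list:
            """Speech activity detection: split a recording into voice-active regions (stub)."""
            return [audio]

        def classify_role(segment: Segment) -> Segment:
            """Speaker segmentation and role classification (stub)."""
            segment.speaker_role = "ATCO"
            return segment

        def recognize(segment: Segment, context: list) -> Segment:
            """ASR with entity boosting from surveillance or flight plan context (stub)."""
            segment.transcript = "lufthansa four tango kilo descend flight level one two zero"
            return segment

        def run_pipeline(audio: bytes, context_callsigns: list) -> list:
            segments = [Segment(chunk) for chunk in detect_speech(audio)]
            return [recognize(classify_role(seg), context_callsigns) for seg in segments]

        print(run_pipeline(b"raw-audio-bytes", ["DLH4TK", "BAW27C"]))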

    Induction of tolerogenic lung CD4+ T cells by local treatment with a pSTAT-3 and pSTAT-5 inhibitor ameliorated experimental allergic asthma

    Signal transducer and activator of transcription (STAT)-3 inhibitors play an important role in regulating immune responses. Galiellalactone (GL) is a fungal secondary metabolite known to interfere with the binding of phosphorylated signal transducer and activator of transcription (pSTAT)-3 as well as of pSTAT-6 dimers to their target DNA in vitro. Intranasal delivery of 50 μg GL into the lung of naive Balb/c mice induced FoxP3 expression locally and IL-10 production and IL-12p40 mRNA expression in the airways in vivo. In a murine model of allergic asthma, GL significantly suppressed the cardinal features of asthma, such as airway hyperresponsiveness, eosinophilia, and mucus production, after sensitization and subsequent challenge with ovalbumin (OVA). These changes resulted in induction of IL-12p70 and IL-10 production by lung CD11c+ dendritic cells (DCs), accompanied by an increase of IL-3 receptor α chain and indoleamine-2,3-dioxygenase expression in these cells. Furthermore, GL inhibited IL-4 production in T-bet-deficient CD4+ T cells and down-regulated the suppressor of cytokine signaling-3 (SOCS-3), also in the absence of STAT-3 in T cells, in the lung in a murine model of asthma. In addition, we found reduced amounts of pSTAT-5 in the lung of GL-treated mice, which correlated with decreased release of IL-2 by lung OVA-specific CD4+ T cells after treatment with GL in vitro, also in the absence of T-bet. Thus, GL treatment in vivo and in vitro emerges as a novel therapeutic approach for allergic asthma by modulating lung DC phenotype and function, resulting in a protective response via CD4+FoxP3+ regulatory T cells locally.

    Customization of Automatic Speech Recognition Engines for Rare Word Detection Without Costly Model Re-Training

    Thanks to Alexa, Siri, or Google Assistant, automatic speech recognition (ASR) has changed our daily life during the last decade. Prototypic applications in the air traffic management (ATM) domain are available. Recently, pre-filling radar label entries with ASR support has reached the technology readiness level before industrialization (TRL6). However, seldom spoken, airspace-related words that are relevant in the ATM context remain a challenge for sophisticated applications. Open-source ASR toolkits and large pre-trained models allow experts to tailor ASR to new domains, but typically require a certain amount of domain-specific training data, i.e., transcribed speech for adapting acoustic and/or language models. In general, it is sufficient for a "universal" ASR engine to reliably recognize the few hundred words that form the vocabulary of the voice communications between air traffic controllers and pilots. However, for each airport, a few hundred airport-dependent words that are seldom spoken need to be integrated. These challenging word entities comprise special airline designators and waypoint names such as "dexon" or "burok", which only appear in a specific region. When used, they are highly informative and thus require high recognition accuracies. Allowing plug-and-play customization with minimal expert manipulation assumes that no additional training, i.e., fine-tuning of the universal ASR, is required. This paper presents an innovative approach to automatically integrate new specific word entities into the universal ASR system. The recognition rate of these region-specific word entities increases by a factor of 6 with respect to the universal ASR.
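
    The general idea of adding rare words without re-training can be sketched as follows; the grapheme-to-phoneme rule and the boosting weight are placeholders for illustration, not the method evaluated in the paper.

        # Hedged sketch: register region-specific words with a universal ASR system by
        # generating lexicon entries and a decode-time boosting list, without any model
        # re-training. The G2P rule below is a naive placeholder.
        NEW_WORDS = ["dexon", "burok"]   # waypoint names mentioned in the abstract
        BOOST_WEIGHT = 3.0               # hypothetical decode-time boosting factor

        def naive_g2p(word: str) -> str:
            """Placeholder grapheme-to-phoneme mapping: one symbol per letter."""
            return " ".join(word.lower())

        def lexicon_entries(words: list) -> list:
            """Produce lexicon lines 'word phone phone ...' for the decoder's word list."""
            return [f"{w} {naive_g2p(w)}" for w in words]

        def boosting_list(words: list, weight: float) -> dict:
            """Context-biasing list: each rare word receives an additive score at decode time."""
            return {w: weight for w in words}

        print(lexicon_entries(NEW_WORDS))
        print(boosting_list(NEW_WORDS, BOOST_WEIGHT))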

    How to Measure Speech Recognition Performance in the Air Traffic Control Domain? The Word Error Rate is only half of the truth

    Applying Automatic Speech Recognition (ASR) in the domain of analogue voice communication between air traffic controllers (ATCos) and pilots involves more end-user requirements than just transforming spoken words into text. Perfect word recognition is useless as long as the semantic interpretation is wrong. For an ATCo it is of no importance whether the words of a greeting are correctly recognized; a wrong recognition of a greeting should, however, not disturb the correct recognition of, e.g., a "descend" command. Recently, 14 European partners from the Air Traffic Management (ATM) domain agreed on a common set of rules, i.e., an ontology, for annotating the speech utterances of an ATCo. This paper first extends the ontology to pilot utterances and then compares different ASR implementations on the semantic level by introducing command recognition, command recognition error, and command rejection rates. The implementation used in this paper achieves a command recognition rate better than 94% for Prague Approach, even when the word error rate (WER) is above 2.5%.
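
    How such semantic rates could be computed is sketched below with assumed, simplified definitions (the ontology agreed by the partners defines the exact rules): a predicted command counts as recognized if it matches the annotated command, as rejected if nothing was extracted, and as an error otherwise.

        # Minimal sketch with assumed definitions of the three semantic rates.
        def command_rates(gold: list, predicted: list) -> dict:
            """gold: annotated commands; predicted: extracted commands, or None if rejected."""
            assert len(gold) == len(predicted)
            n = len(gold)
            recognized = sum(1 for g, p in zip(gold, predicted) if p == g)
            rejected = sum(1 for p in predicted if p is None)
            errors = n - recognized - rejected
            return {"recognition_rate": recognized / n,
                    "error_rate": errors / n,
                    "rejection_rate": rejected / n}

        gold = ["DLH4TK DESCEND FL120", "BAW27C HEADING 210", "RYR89A CONTACT 119.2"]
        pred = ["DLH4TK DESCEND FL120", "BAW27C HEADING 120", None]
        print(command_rates(gold, pred))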

    Safety Aspects of Supporting Apron Controllers with Automatic Speech Recognition and Understanding Integrated into an Advanced Surface Movement Guidance and Control System

    The information air traffic controllers (ATCos) communicate via radio telephony is valuable for digital assistants to provide additional safety. Yet, ATCos have to enter this information manually. Assistant-based speech recognition (ABSR) has proven to be a lightweight technology that automatically extracts and feeds the content of ATC communication into digital systems without additional human effort. This article explains how ABSR can be integrated into an advanced surface movement guidance and control system (A-SMGCS). The described validations were performed in the complex apron simulation training environment of Frankfurt Airport with 14 apron controllers in a human-in-the-loop simulation in summer 2022. The integration significantly reduces the workload of controllers and increases safety as well as overall performance. Based on a word error rate of 3.1%, the command recognition rate was 91.8%, with a callsign recognition rate of 97.4%. This performance was enabled by the integration of A-SMGCS and ABSR: the command recognition rate improves by more than 15% absolute when A-SMGCS data is considered in ABSR.

    How Does Pre-trained Wav2Vec2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications

    Recent work on self-supervised pre-training focuses on leveraging large-scale unlabeled speech data to build robust end-to-end (E2E) acoustic models (AM) that can later be fine-tuned on downstream tasks, e.g., automatic speech recognition (ASR). Yet, few works have investigated the impact on performance when the data substantially differs between the pre-training and downstream fine-tuning phases (i.e., domain shift). We target this scenario by analyzing the robustness of Wav2Vec2.0 and XLS-R models on downstream ASR for a completely unseen domain, i.e., air traffic control (ATC) communications. We benchmark the proposed models on four challenging ATC test sets (signal-to-noise ratios vary between 5 and 20 dB). Relative word error rate (WER) reductions of 20% to 40% are obtained in comparison to hybrid-based state-of-the-art ASR baselines by fine-tuning E2E acoustic models with a small fraction of labeled data. We also study the impact of fine-tuning data size on WERs, going from 5 minutes (few-shot) to 15 hours.
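
    The fine-tuning recipe referred to above can be sketched roughly as follows; the checkpoint name, optimizer settings, and toy inputs are assumptions for illustration, not the exact setup of the paper.

        # Hedged sketch: fine-tune a pre-trained Wav2Vec2.0 CTC model on a small
        # amount of labeled ATC speech (16 kHz audio, uppercase transcripts).
        import torch
        from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

        processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
        model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

        def fine_tune_step(waveform: torch.Tensor, transcript: str) -> float:
            """One gradient step on a single (audio, transcript) pair."""
            inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
            labels = processor.tokenizer(transcript, return_tensors="pt").input_ids
            loss = model(inputs.input_values, labels=labels).loss  # CTC loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            return loss.item()

        # Toy call: one second of silence and an illustrative transcript.
        print(fine_tune_step(torch.zeros(16_000), "LUFTHANSA FOUR TANGO KILO DESCEND"))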