Chemical effects of nuclear recoil
The investigations recorded in this thesis relate to a radiochemical study of the fates of iodine atoms recoiling after undergoing radiative neutron capture. Particular attention has been focussed upon mixtures of alkyl iodides, using the 'quasi-stable' fission product (^129)I (1.72 x 10(^7) y) to fix unequivocally the origin of the recoil iodine found in the products. Techniques have been developed for the synthesis of methyl and propyl iodides labelled with (^129)I on a micro-scale, and mixtures of these with unlabelled alkyl iodides have been irradiated in the Harwell reactor "BEPO". Other irradiations have been made with 14 MeV neutrons from the D+T reaction; with a Ra-γ-Be neutron source; and with a 100 curie (^60)Co source. Separation (by gas/liquid chromatography) and measurement techniques have also been perfected. From a study of the distribution of the active iodine between methyl and propyl iodides it seems that the recoiling atom is more likely to appear as methyl than as propyl iodide. In the case of (^130)I recoils this ratio is about 3:1, for (^128)I it is about 1.8:1, and for (^126)I (produced by the n,2n reaction) it is about 2.2:1. The results can only be explained if it is also assumed that about 10% of the neutron captures (in (^129)I) either do not result in bond rupture or lead to an immediate recombination with the residue of the parent molecule. The effect of γ-radiation (unavoidably present in neutron irradiations) on the exchange of iodine between methyl and propyl iodides has been studied with the aid of (^131)I, using a 100 curie (^60)Co source.
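As a purely illustrative bookkeeping identity: if, for a labelled methyl iodide parent, F denotes the (^130)I activity retained in (or immediately recombined with) the parent molecule, B the activity of free recoils that become organically bound, and p the intrinsic fraction of those free recoils appearing as methyl iodide (F, B and p are symbols introduced here for exposition, not taken from the thesis), then the observed organic activity ratio is

\[
\frac{A_{\mathrm{CH_3I}}}{A_{\mathrm{C_3H_7I}}} \;=\; \frac{F + Bp}{B(1-p)} \;=\; \frac{p}{1-p} \;+\; \frac{F/B}{1-p},
\]

so a retention of roughly 10% of captures (the F term) raises the apparent methyl:propyl ratio above the intrinsic value p/(1-p) when the labelled parent is methyl iodide.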
Implicit Self-supervised Language Representation for Spoken Language Diarization
In a code-switched (CS) scenario, the use of spoken language diarization (LD) as a pre-processing system is essential. Further, implicit frameworks are preferable over explicit ones, as they can more easily be adapted to deal with low/zero-resource languages. Inspired by the speaker diarization (SD) literature, three frameworks based on (1) fixed segmentation, (2) change-point-based segmentation and (3) an end-to-end (E2E) approach are proposed to perform LD. Initial exploration with the synthetic TTSF-LD dataset shows that using the x-vector as an implicit language representation with an appropriate analysis window length can achieve performance on a par with explicit LD. The best implicit LD performance in terms of Jaccard error rate (JER) is achieved using the E2E framework. However, with the E2E framework the performance of implicit LD degrades on the practical Microsoft CS (MSCS) dataset. The difference in performance is mostly due to the distributional difference between the monolingual segment durations of the secondary language in the MSCS and TTSF-LD datasets. Moreover, to avoid segment smoothing, the smaller duration of the monolingual segments suggests the use of a small analysis window. At the same time, with a small window the x-vector representation is unable to capture the required language discrimination because of acoustic similarity, as the same speaker is speaking both languages. Therefore, to resolve this issue, a self-supervised implicit language representation is proposed in this study. Compared with the x-vector representation, the proposed representation provides a relative improvement and achieves a lower JER using the E2E framework.
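A minimal sketch of the fixed-segmentation idea above, assuming a generic embed(window) extractor standing in for an x-vector or self-supervised model, two-language k-means clustering, and scikit-learn; the paper's actual frameworks, window lengths and datasets are not reproduced here.

import numpy as np
from sklearn.cluster import KMeans

def fixed_segmentation_ld(audio, sr, embed, win_s=2.0, hop_s=0.5):
    # Slide a fixed analysis window over the signal and embed each window.
    win, hop = int(win_s * sr), int(hop_s * sr)
    starts = list(range(0, max(len(audio) - win, 1), hop))
    emb = np.stack([embed(audio[s:s + win]) for s in starts])
    # Cluster the window embeddings into two language groups (code-switched pair).
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(emb)
    # Return (start_time, end_time, language_cluster) per analysis window.
    return [(s / sr, (s + win) / sr, int(l)) for s, l in zip(starts, labels)]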
Speaker Recognition using Supra-segmental Level Excitation Information
Speaker-specific information present in the excitation signal is mostly viewed at the sub-segmental, segmental and supra-segmental levels. In this work, the supra-segmental level information is explored for recognizing speakers. An earlier study has shown that the combined use of pitch and epoch strength vectors provides useful supra-segmental information. However, the speaker recognition accuracy achieved by supra-segmental level features is relatively poor compared with that of source information from the other levels. This may be because the modulation information present at the supra-segmental level of the excitation signal is not manifested properly in the pitch and epoch strength vectors. We propose a method to model the supra-segmental level modulation information from residual mel-frequency cepstral coefficient (R-MFCC) trajectories. The evidence from R-MFCC trajectories, combined with pitch and epoch strength vectors, is proposed to represent supra-segmental information. Experimental results show that, compared to pitch and epoch strength vectors, the proposed approach provides relatively improved performance. Further, the proposed supra-segmental level information is relatively more complementary to information from the other levels.
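A minimal sketch of extracting cepstral features from the LP residual (the excitation signal), assuming librosa and scipy; for brevity the LP analysis is performed over the whole signal rather than frame-wise, and the paper's exact R-MFCC trajectory modelling is not reproduced.

import numpy as np
import librosa
from scipy.signal import lfilter

def residual_mfcc(y, sr, lp_order=16, n_mfcc=13):
    # LP coefficients of the speech signal; inverse filtering with A(z) gives the residual.
    a = librosa.lpc(y, order=lp_order)
    residual = lfilter(a, [1.0], y)
    # MFCCs of the residual approximate excitation-source information (R-MFCC-like features).
    return librosa.feature.mfcc(y=residual.astype(np.float32), sr=sr, n_mfcc=n_mfcc)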
Significance of Vowel Onset Point Information for Speaker Verification
This work demonstrates the significance of information about vowel onset points (VOPs) for speaker verification. A VOP is defined as the instant at which the onset of a vowel takes place. Vowel-like regions can be identified using VOPs. By production, vowel-like regions have impulse-like excitation, and therefore the impulse response of the vocal tract system is better manifested in them; they are also relatively high signal-to-noise ratio (SNR) regions. Speaker information extracted from such regions may therefore be more discriminative. Because of this, better speaker modeling and more reliable testing may be possible using features extracted from vowel-like regions. It is demonstrated in this work that, for clean and matched conditions, relatively few frames from vowel-like regions are sufficient for speaker modeling and testing. Alternatively, for degraded and mismatched conditions, vowel-like regions provide better performance.
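A simplified, energy-based illustration of locating vowel onset candidates, assuming numpy and scipy; this is only a sketch of the idea that sharp rises in a smoothed energy contour mark likely onsets, not the VOP evidence combination evaluated in the abstract.

import numpy as np
from scipy.signal import find_peaks

def vop_candidates(y, sr, frame_s=0.02, hop_s=0.01):
    frame, hop = int(frame_s * sr), int(hop_s * sr)
    # Normalised short-time energy contour.
    energy = np.array([np.sum(y[i:i + frame] ** 2)
                       for i in range(0, len(y) - frame, hop)])
    energy /= energy.max() + 1e-12
    # Gaussian smoothing followed by a first difference highlights energy rises.
    g = np.exp(-0.5 * (np.arange(-25, 26) / 10.0) ** 2)
    evidence = np.diff(np.convolve(energy, g / g.sum(), mode="same"), prepend=0.0)
    peaks, _ = find_peaks(evidence, height=0.01, distance=5)  # ~50 ms minimum spacing
    return peaks * hop / sr  # candidate vowel onset times in seconds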
Quercetin prevents progression of disease in elastase/LPS-exposed mice by negatively regulating MMP expression
Background: Chronic obstructive pulmonary disease (COPD) is characterized by chronic bronchitis, emphysema and irreversible airflow limitation. These changes are thought to be due to oxidative stress and an imbalance of proteases and antiproteases. Quercetin, a plant flavonoid, is a potent antioxidant and anti-inflammatory agent. We hypothesized that quercetin reduces lung inflammation and improves lung function in elastase/lipopolysaccharide (LPS)-exposed mice, which show typical features of COPD, including airways inflammation, goblet cell metaplasia, and emphysema. Methods: Mice treated with elastase and LPS once a week for 4 weeks were subsequently administered 0.5 mg of quercetin dihydrate or 50% propylene glycol (vehicle) by gavage for 10 days. Lungs were examined for elastance, oxidative stress, inflammation, and matrix metalloproteinase (MMP) activity. Effects of quercetin on MMP transcription and activity were examined in LPS-exposed murine macrophages. Results: Quercetin-treated, elastase/LPS-exposed mice showed improved elastic recoil and decreased alveolar chord length compared to vehicle-treated controls. Quercetin-treated mice showed decreased levels of thiobarbituric acid reactive substances, a measure of lipid peroxidation caused by oxidative stress. Quercetin also reduced lung inflammation, goblet cell metaplasia, and mRNA expression of pro-inflammatory cytokines and muc5AC. Quercetin treatment decreased the expression and activity of MMP9 and MMP12 in vivo and in vitro, while increasing expression of the histone deacetylase Sirt-1 and suppressing MMP promoter H4 acetylation. Finally, co-treatment with the Sirt-1 inhibitor sirtinol blocked the effects of quercetin on the lung phenotype. Conclusions: Quercetin prevents progression of emphysema in elastase/LPS-treated mice by reducing oxidative stress, lung inflammation and expression of MMP9 and MMP12. http://deepblue.lib.umich.edu/bitstream/2027.42/78260/1/1465-9921-11-131.xml http://deepblue.lib.umich.edu/bitstream/2027.42/78260/2/1465-9921-11-131.pdf
Implicit spoken language diarization
Spoken language diarization (LD) and related tasks are mostly explored using the phonotactic approach. Phonotactic approaches mostly use an explicit way of language modeling and hence require intermediate phoneme modeling and transcribed data. Alternatively, the ability of deep learning approaches to model temporal dynamics may help in the implicit modeling of language information through deep embedding vectors. Hence this work initially explores the available speaker diarization frameworks that capture speaker information implicitly in order to perform LD tasks. The performance of the LD system on synthetic code-switched data using the end-to-end x-vector approach is 6.78% and 7.06%, and on practical data is 22.50% and 60.38%, in terms of diarization error rate and Jaccard error rate (JER), respectively. The performance degradation is due to data imbalance and is resolved to some extent by using pre-trained wav2vec embeddings, which provide a relative improvement of 30.74% in terms of JER.
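A minimal sketch of obtaining segment-level embeddings from a pre-trained wav2vec 2.0 model via Hugging Face transformers; the checkpoint name is an illustrative assumption, and the paper's exact model, pooling and diarization back-end may differ.

import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

name = "facebook/wav2vec2-base"  # illustrative public checkpoint (assumption)
extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
model = Wav2Vec2Model.from_pretrained(name).eval()

def segment_embedding(audio, sr=16000):
    # Frame-level contextual representations, mean-pooled to one vector per segment.
    inputs = extractor(audio, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        frames = model(**inputs).last_hidden_state   # shape (1, T, hidden_dim)
    return frames.mean(dim=1).squeeze(0)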
Federated learning framework for prediction based load distribution in 5G network slicing
The 5G technology brings transformative changes across sectors like healthcare, automotive, and entertainment by integrating massive IoT networks and supporting dense device connectivity. Network slicing in 5G further extends this capability by allowing tailored virtual networks for specific applications, enhancing operational efficiency and user experience across diverse scenarios. In this paper, we propose a framework that uses Federated Learning (FL) in 5G network slicing to support service assignment. The aim is to optimize the allocation of network traffic among the various slices. The framework first predicts the load on each network slice, and the incoming traffic is then allocated to the most suitable slice that is not heavily loaded. The DeepSlice dataset on 5G slicing is horizontally split into multiple segments to train a federated CNN model deployed across multiple clients. The model is analyzed with a varying number of clients, and metrics such as accuracy and loss are observed. The performance of the federated approach is compared with a centralized prediction approach, keeping the essential hyperparameters unchanged. Training and testing outcomes are presented for better interpretation of the proposed framework. The observations show that federated learning outperforms the centralized technique in terms of both accuracy and loss.
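A minimal FedAvg-style aggregation sketch, assuming PyTorch; the CNN architecture, the DeepSlice preprocessing and the client/server orchestration used in the paper are not reproduced here, and the names client_models/client_datasets below are hypothetical.

import torch

def fedavg(client_states, client_sizes):
    # Size-weighted average of client state_dicts (FedAvg-style aggregation).
    total = float(sum(client_sizes))
    return {key: sum(state[key].float() * (n / total)
                     for state, n in zip(client_states, client_sizes))
            for key in client_states[0]}

# Usage sketch: each client trains its local CNN on one horizontal split of the data,
# then the server aggregates and redistributes the weights:
#   global_state = fedavg([m.state_dict() for m in client_models],
#                         [len(d) for d in client_datasets])
#   global_model.load_state_dict(global_state)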
Multilingual Audio-Visual Smartphone Dataset and Evaluation
Smartphones have been employed with biometric-based verification systems to provide security in highly sensitive applications. Audio-visual biometrics are gaining popularity due to their usability, and they are also challenging to spoof because of their multimodal nature. In this work, we present an audio-visual smartphone dataset captured with five different recent smartphones. This new dataset contains 103 subjects captured in three different sessions covering different real-world scenarios. Three different languages are acquired in this dataset to include the problem of language dependency of speaker recognition systems. These unique characteristics of the dataset will pave the way for implementing novel state-of-the-art unimodal or audio-visual speaker recognition systems. We also report the performance of benchmarked biometric verification systems on our dataset. The robustness of the biometric algorithms is evaluated with extensive experiments against multiple dependencies such as signal noise, device and language, as well as presentation attacks such as replay and synthesized signals. The obtained results raise many concerns about the generalization properties of state-of-the-art biometric methods on smartphones.
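A minimal sketch of the equal error rate (EER), a standard operating point for reporting biometric verification performance, assuming scikit-learn; the score and label arrays are placeholders for a system's genuine/impostor trial outputs.

import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    # labels: 1 = genuine trial, 0 = impostor trial; scores: higher means more genuine.
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))   # operating point where FAR and FRR cross
    return 0.5 * (fpr[idx] + fnr[idx])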
