
    Electronic Cigarettes: Neurological Effects on Murine Offspring and the Response of Neuronal Cells

    University of Technology Sydney, Faculty of Science.
    Electronic cigarettes (e-cigarettes) are battery-powered devices that convert an oily, flavoured liquid into an aerosol. E-cigarette liquids contain propylene glycol, glycerin, flavouring and varying concentrations of nicotine. Due to aggressive marketing, e-cigarettes are attractive to vulnerable groups such as young people and pregnant women. Within these populations, e-cigarettes are perceived as a safer alternative to smoking tobacco cigarettes, although there is limited evidence to support this. In this thesis, Chapter 1 provides an extensive review of what is currently known about e-cigarettes in the literature. Chapter 2 describes a mouse pregnancy model of e-cigarette exposure and examines the offspring at three time points: postnatal day 1 (immediately after birth), postnatal day 20 (immediately after weaning) and week 13 (adulthood). Chapter 3 describes a pregnancy model of switching from tobacco cigarette to e-cigarette exposure during pregnancy. In both Chapters 2 and 3, behavioural assessments using the novel object recognition and elevated plus maze tests were conducted to determine changes to short-term memory, anxiety and exploration, and DNA methylation and epigenetic gene expression in the offspring brain were investigated. Finally, Chapter 4 investigated the effects of e-cigarette condensate on differentiated neuroblastoma cells (diff-SHSY5Y), microglial (BV2) cells and human brain endothelial cells (HBEC) in monoculture and in co-culture using a blood-brain barrier (BBB) model. The results showed that offspring of mothers exposed to e-cigarette aerosols, with or without nicotine, had significant changes to memory, anxiety, hyperactivity, DNA methylation and epigenetic gene expression compared to normal offspring. Continuous tobacco cigarette exposure had significant effects on offspring behaviour and epigenetics; switching to e-cigarettes during pregnancy reduced some, but not all, of these changes to normal levels. In the cell culture experiments, e-cigarette exposure reduced cell viability and increased oxidative stress in diff-SHSY5Y, BV2 and HBEC monocultures. In a co-culture model of the BBB, significant epigenetic gene changes were observed in diff-SHSY5Y cells after treatment with conditioned media from BV2 cells. All of these results are summarised in Chapter 5. In summary, the experiments showed that neurological changes, including behavioural and epigenetic alterations, occurred in the offspring after maternal e-cigarette exposure, and that this may be due to a direct effect of e-cigarette constituents on neuronal cells or to an indirect inflammatory response involving microglia. Overall, this study concluded that e-cigarettes are not safe to use during pregnancy.

    Improving the Performance of Online Neural Transducer Models

    Having a sequence-to-sequence model that can operate in an online fashion is important for streaming applications such as Voice Search. The neural transducer (NT) is a streaming sequence-to-sequence model, but it has shown a significant degradation in performance compared to non-streaming models such as Listen, Attend and Spell (LAS). In this paper, we present various improvements to NT. Specifically, we look at increasing the window over which NT computes attention, mainly by looking backwards in time so that the model remains online. In addition, we explore initializing an NT model from a LAS-trained model so that it is guided by a better alignment. Finally, we explore stronger language modelling, both by using wordpiece models and by applying an external LM during beam search. On a Voice Search task, we find that with these improvements NT can match the performance of LAS.
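    The external-LM idea mentioned above can be illustrated with a minimal Python sketch of shallow-fusion rescoring during beam search. The helper names (rescore_beam, lm_log_prob, lm_weight) are illustrative assumptions, not the paper's implementation; the snippet only shows how a transducer score and an external LM score could be combined per hypothesis.

```python
# Minimal sketch of shallow fusion with an external LM during beam search.
# Names and the interpolation weight are assumptions for illustration.
import math
from typing import Callable, List, Tuple

def rescore_beam(
    hypotheses: List[Tuple[List[int], float]],   # (token ids, transducer log-prob)
    lm_log_prob: Callable[[List[int]], float],   # token ids -> external LM log-prob
    lm_weight: float = 0.3,
) -> List[Tuple[List[int], float]]:
    """Combine neural-transducer scores with an external LM score."""
    rescored = []
    for tokens, nt_score in hypotheses:
        fused = nt_score + lm_weight * lm_log_prob(tokens)
        rescored.append((tokens, fused))
    # Keep the beam sorted by the fused score, best hypothesis first.
    return sorted(rescored, key=lambda h: h[1], reverse=True)

# Example usage with a toy uniform LM over a 100-token vocabulary.
toy_lm = lambda tokens: len(tokens) * math.log(1.0 / 100)
beam = [([5, 17, 42], -4.2), ([5, 17, 99], -4.5)]
print(rescore_beam(beam, toy_lm))
```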

    Improved scheme for generation of vibrational trio coherent states of a trapped ion

    We improve a previously proposed scheme (Phys. Rev. A 66 (2002) 065401) for generating vibrational trio coherent states of a trapped ion. The improved version gains a double advantage: (i) it uses only five lasers instead of eight, and (ii) the generation process can be made remarkably faster.

    The Obstacles Facing India on Its Journey to Becoming a Developed Country

    Among the developing countries of the world, India stands out as one of the fastest-growing economies. India, the seventh-largest country in the world, borders the Indian Ocean to the south, the Arabian Sea to the south-west and the Bay of Bengal to the south-east, and shares land borders with Pakistan, China, Bhutan, Burma, and Bangladesh. India is recognized for its long history of commercial and cultural wealth, and its political and economic history has led it to become one of the fastest-developing countries in the world. Despite being a newly industrializing nation, India continues to face the challenges of overpopulation, poor water and sanitation, and low adult literacy rates. These problems are addressed in this report, along with policy recommendations for India to overcome them.

    Multi-Dialect Speech Recognition With A Single Sequence-To-Sequence Model

    Sequence-to-sequence models provide a simple and elegant solution for building speech recognition systems by folding the separate components of a typical system, namely the acoustic (AM), pronunciation (PM) and language (LM) models, into a single neural network. In this work, we look at one such sequence-to-sequence model, namely Listen, Attend and Spell (LAS), and explore the possibility of training a single model to serve different English dialects, which simplifies the process of training multi-dialect systems without the need for separate AMs, PMs and LMs for each dialect. We show that simply pooling the data from all dialects into one LAS model falls behind the performance of a model fine-tuned on each dialect. We then look at incorporating dialect-specific information into the model, both by modifying the training targets, inserting the dialect symbol at the end of the original grapheme sequence, and by feeding a 1-hot representation of the dialect into all layers of the model. Experimental results on seven English dialects show that our proposed system is effective in modeling dialect variations within a single LAS model, outperforming a LAS model trained individually on each of the seven dialects by 3.1% to 16.5% relative.
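    The two dialect-conditioning ideas described above, a dialect symbol appended to the grapheme targets and a 1-hot dialect vector fed to each layer, can be sketched as follows. The dialect list, symbol format and helper names are illustrative assumptions, not the paper's actual setup.

```python
# Toy sketch of dialect conditioning for a single multi-dialect model.
# Dialect codes, symbols and dimensions below are made up for illustration.
import numpy as np

DIALECTS = ["en-us", "en-gb", "en-in", "en-au", "en-ca", "en-ie", "en-za"]

def add_dialect_target(graphemes: list, dialect: str) -> list:
    """Append a dialect symbol to the end of the grapheme target sequence."""
    return graphemes + [f"<{dialect}>"]

def dialect_one_hot(dialect: str) -> np.ndarray:
    """1-hot dialect vector that can be concatenated to a layer's input."""
    vec = np.zeros(len(DIALECTS), dtype=np.float32)
    vec[DIALECTS.index(dialect)] = 1.0
    return vec

# Example: condition on British English.
print(add_dialect_target(list("hello"), "en-gb"))          # ['h','e','l','l','o','<en-gb>']
layer_input = np.random.randn(1, 256).astype(np.float32)   # hypothetical layer activation
conditioned = np.concatenate([layer_input, dialect_one_hot("en-gb")[None, :]], axis=-1)
print(conditioned.shape)                                    # (1, 263)
```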

    No Need for a Lexicon? Evaluating the Value of the Pronunciation Lexica in End-to-End Models

    For decades, context-dependent phonemes have been the dominant sub-word unit for conventional acoustic modeling systems. This status quo has recently begun to be challenged by end-to-end models, which seek to combine the acoustic, pronunciation, and language model components into a single neural network. Such systems, which typically predict graphemes or words, simplify the recognition process since they remove the need for a separate expert-curated pronunciation lexicon to map from phoneme-based units to words. However, there has been little previous work comparing phoneme-based and grapheme-based sub-word units in the end-to-end modeling framework, to determine whether the gains from such approaches are primarily due to the new probabilistic model or to the joint learning of the various components with grapheme-based units. In this work, we conduct detailed experiments aimed at quantifying the value of phoneme-based pronunciation lexica in the context of end-to-end models. We examine phoneme-based end-to-end models, contrasted against grapheme-based ones on a large-vocabulary English Voice Search task, where we find that graphemes do indeed outperform phonemes. We also compare grapheme- and phoneme-based approaches on a multi-dialect English task, which once again confirms the superiority of graphemes and greatly simplifies the system for recognizing multiple dialects.
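    The difference between the two target types can be illustrated with a small Python sketch: grapheme targets are obtained directly from the word spelling, while phoneme targets require a lookup in a pronunciation lexicon. The toy lexicon and pronunciations below are invented for illustration and do not come from the paper or any real dictionary.

```python
# Minimal sketch contrasting grapheme targets with lexicon-derived phoneme targets.
# The tiny lexicon is a made-up illustration, not a real pronunciation dictionary.
TOY_LEXICON = {
    "play": ["p", "l", "ey"],
    "music": ["m", "y", "uw", "z", "ih", "k"],
}

def grapheme_targets(words):
    """Graphemes need no lexicon: just split each word into characters."""
    return [ch for word in words for ch in word]

def phoneme_targets(words, lexicon):
    """Phonemes require an expert-curated lexicon to map words to units."""
    return [ph for word in words for ph in lexicon[word]]

utterance = ["play", "music"]
print(grapheme_targets(utterance))              # ['p','l','a','y','m','u','s','i','c']
print(phoneme_targets(utterance, TOY_LEXICON))  # ['p','l','ey','m','y','uw','z','ih','k']
```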