
    Cell models for the study of the association of EIF2AK3/PERK with progressive supranuclear palsy and neurodegeneration

    A genome-wide association study (GWAS) identified EIF2AK3 as a risk factor for progressive supranuclear palsy (PSP). EIF2AK3 encodes protein kinase R-like endoplasmic reticulum kinase (PERK), which senses unfolded protein accumulation within the endoplasmic reticulum (ER) lumen. PERK kinase activity has been genetically associated with increased PSP risk. The associated single nucleotide polymorphism (SNP), rs7571971, is in linkage disequilibrium with coding SNPs of EIF2AK3: rs867529 (Ser136Cys), rs13045 (Gln166Arg) and rs1805165 (Ala704Ser), forming coding haplotypes of three highly conserved residues: Haplotype A (conserved), Ser136-Arg166-Ser704, and Haplotype B (divergent), Cys136-Gln166-Ala704. A previous study showed that the divergent risk Haplotype B (HapB) has increased PERK activity, suggesting that this forms the basis of the genetic risk. The polymorphisms could therefore affect either functional domain (or both). Our aim was to investigate whether the two major coding haplotypes of PERK impart differences in the activation of PERK, either through impaired homodimerization and/or kinase activity. We generated isogenic HEK293 cell lines for Tet-inducible expression of PERK coding haplotypes with a C-terminal myc-tag to discern them from endogenous PERK. By Western blot analysis, we demonstrated robust, inducible expression of myc-tagged PERK. Interestingly, with subsequent passages, the freeze-thawed HapB PERK variants alone underwent C-terminal cleavage, as evidenced by loss of the myc-tag, which resulted not only in increased PERK protein but also in reduced levels of activated p-PERK and p-eIF2α. Myc-cleaved HapB cells also showed significantly delayed and impaired kinase activity, alongside increased cell death, following induction of the UPR. However, we failed to obtain any evidence for a connection between impaired HapB PERK activity and tau aggregation, even with the more aggregation-prone 2N4R tau (P301L), additional seeding with brain lysates from the rTg4510 mouse, and inhibition of the ubiquitin proteasome system (UPS) with MG132. In summary, we established stable isogenic cell lines for Tet-inducible expression of PERK functional haplotypes. Although the C-terminal myc-tag cleavage of the passage 2 HapB cells resulted in reduced PERK activation leading to an impaired UPR, a necessary further control to validate these conclusions would be to include untagged passage 2 HapB cells. This is essential to clarify whether the reduced HapB PERK activity following cell passaging is an artefact of myc cleavage or due to post-translational modifications inherent to HapB PERK.

    Development and characterisation of flame retardant nanoparticulate bio-based polymer composites

    Since the discovery of carbon nanotubes (CNTs) and nanoclays, a great deal of research has been conducted into applications such as energy storage, molecular electronics, structural composites, and biomedicine, to name but a few. Their unique intrinsic properties and size give them ever-growing potential in the consumer and high-technology sectors. In recent years the concept of using these as fillers in polymers has shown great potential. One such function is as flame retardant additives. These possess much better environmental credentials than halogenated additives and require only a small loading content compared to traditional micron-sized fillers. This combination makes these fillers ideal candidates for polymers and their composites, especially with regard to natural fibre composites. Owing to environmental awareness and economic considerations, natural fibre reinforced polymer composites present a viable alternative to synthetic fibre reinforced polymer composites such as glass fibre. However, merely substituting synthetic with natural fibres only solves part of the problem; selecting a suitable material for the matrix is therefore key. Cellulose is both the most common biopolymer and the most common organic compound on Earth. About 33 % of all plant matter is cellulose; for example, the cellulose content of cotton is 90 % and that of wood is 50 %. However, just like their synthetic counterparts, the poor flame retardancy of bio-derived polymers restricts their application and development in important fields such as construction and transportation. Traditional methods to improve the flame retardancy of polymeric materials involve micron-sized inorganic fillers such as ammonium polyphosphate (APP) or aluminium trihydroxide (ATH). Imparting flame retardancy with these inorganic fillers is possible, but only at relatively high loadings of more than 50 wt.%, which is detrimental to the mechanical properties of the composite and causes embrittlement. Nanofillers can achieve similar, if not better, flame retarding performance than their micron-sized counterparts but at much lower loading levels (<10 wt.%), thus better preserving the characteristics of the unfilled polymer such as good flow, toughness, surface finish and low density. This is the main focus of this study, and it is pursued using various experimental techniques including the cone calorimeter and the newly developed microcalorimeter. After a comprehensive literature survey (Chapter 2), the experimental part of the thesis starts with a feasibility study of a flame retardant natural fibre reinforced sheet moulding compound (SMC) (Chapter 3). This work demonstrated that, with a suitable flame retardant, the peak heat release rate can be reduced. Chapter 4 deals with further improving the flame retardancy of the previously used unsaturated polyester resin; the aim is to study any synergistic behaviour from using aluminium trihydroxide in conjunction with ammonium polyphosphate, tested in the cone calorimeter. In Chapter 5, nanofillers are used to replace traditional micron-sized fillers: in unsaturated polyester, multi-walled carbon nanotubes and sepiolite nanoclay are combined to create a ternary polymer nanocomposite, with the microcalorimeter employed for screening of the heat release rate. This work showed that the ternary nanocomposite exhibited synergistic behaviour, significantly reducing the peak heat release rate. The same nanofillers were utilised in Chapters 6 and 7, but this time in combination with a thermoplastic (polypropylene) and a bio-derived polymer (polylactic acid), respectively. In both systems improved flame retardancy was achieved whilst meeting the recyclability objective. Chapter 8 shows how the optimised natural fibre composite would behave in a large-scale fire test: the ConeTools software package was used to simulate the single burning item (SBI) test and to classify the end product, a necessity with regard to commercialising the product for consumer use. Finally, Chapter 9 summarises the work carried out in this research, as well as possible future work.

    Regularized solutions for terminal problems of parabolic equations.

    The terminal-value problem for the heat equation is not well-posed in the sense of Hadamard, so regularization is needed. In general, partial differential equations (PDEs) with terminal conditions are those in which the solution depends uniquely, but not continuously, on the given condition. In this dissertation, we explore how to find a well-posed approximation of a nonlinear heat equation. Using a small parameter, we construct an approximation problem and apply a modified quasi-boundary value method to regularize a heat equation with time-dependent thermal conductivity, and a quasi-boundary value method to regularize a heat equation with space-dependent thermal conductivity. Finally we prove, in both cases, that the approximate solution converges to the original solution as the parameter goes to zero.
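    For orientation, the following is a minimal sketch of the setting; the dissertation's exact equations are not reproduced here, and the notation (a(t), Omega, g, epsilon) is illustrative.

```latex
% Terminal-value (backward) heat problem -- illustrative form only:
\begin{align*}
  u_t &= a(t)\,\Delta u, && x \in \Omega,\ 0 < t < T,\\
  u(x,T) &= g(x) && \text{(terminal condition; ill-posed)}.
\end{align*}
% Quasi-boundary value regularization: replace the terminal condition
% by a nonlocal condition with a small parameter $\epsilon > 0$,
\begin{align*}
  u^{\epsilon}_t &= a(t)\,\Delta u^{\epsilon}, && x \in \Omega,\ 0 < t < T,\\
  u^{\epsilon}(x,T) + \epsilon\, u^{\epsilon}(x,0) &= g(x).
\end{align*}
% The perturbed problem is well-posed, and one proves
% $u^{\epsilon} \to u$ as $\epsilon \to 0$ whenever the original
% solution exists.
```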

    A Tutorial on Quantum Master Equations: Tips and tricks for quantum optics, quantum computing and beyond

    Quantum master equations are an invaluable tool for modelling the dynamics of a plethora of microscopic systems, ranging from quantum optics and quantum information processing to energy and charge transport, electronic and nuclear spin resonance, photochemistry, and more. This tutorial offers a concise and pedagogical introduction to quantum master equations, accessible to a broad, cross-disciplinary audience. The reader is guided through the basics of quantum dynamics with hands-on examples that build up in complexity. The tutorial covers essential methods like the Lindblad master equation, Redfield relaxation, and Floquet theory, as well as techniques like the Suzuki-Trotter expansion and numerical approaches for sparse solvers. These methods are illustrated with code snippets implemented in Python and other languages, which can be used as a starting point for generalisation and more sophisticated implementations.
    Comment: 57 pages, 12 figures, 34 code examples
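    To give a flavour of what such a snippet can look like, here is a minimal sketch (not taken from the tutorial itself; the Hamiltonian, rates and units are illustrative) that integrates the Lindblad master equation for a driven, decaying two-level atom with plain NumPy/SciPy.

```python
# Lindblad master equation for a driven two-level atom:
#   drho/dt = -i[H, rho] + L rho L^dag - (1/2){L^dag L, rho}
import numpy as np
from scipy.integrate import solve_ivp

sx = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli-x
sm = np.array([[0, 1], [0, 0]], dtype=complex)  # lowering operator |g><e|

omega = 1.0            # Rabi drive strength (illustrative units)
gamma = 0.2            # spontaneous emission rate (illustrative)
H = 0.5 * omega * sx   # drive Hamiltonian
L = np.sqrt(gamma) * sm  # collapse (jump) operator

def lindblad_rhs(t, rho_vec):
    rho = rho_vec.reshape(2, 2)
    comm = -1j * (H @ rho - rho @ H)                     # unitary part
    diss = (L @ rho @ L.conj().T
            - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return (comm + diss).ravel()

rho0 = np.array([[0, 0], [0, 1]], dtype=complex)  # start in excited state
sol = solve_ivp(lindblad_rhs, (0.0, 20.0), rho0.ravel(),
                t_eval=np.linspace(0.0, 20.0, 200))

pop_e = sol.y[3].real  # excited-state population rho_ee over time
print(pop_e[:5])       # decays towards 0 under damped Rabi oscillations
```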

    Critical Evaluation of Existing Theories and Models in Blended Learning in Higher Education

    In Sri Lanka there is a great demand for higher education that the government is finding difficult to fulfil. In addition, graduates lack the soft skills that industry needs, even though they are very thorough in theoretical knowledge. Higher education systems in various countries have adopted various technologies to overcome such barriers; blended learning is one such technology, widely used in higher education in developed countries. This article reports on a literature review of blended learning models, frameworks, and theories. The study undertook a critical evaluation of blended learning, focusing mainly on the design aspects of such models. Following a discussion of findings related to various blended learning models, such as the blended learning assessment model, the hexagonal e-learning assessment model, the time-based blended learning model, the 3-C model, and the hybrid online model, it outlines the theories most closely related to these models. As further findings, the paper gives the most widely used subsystems of blended learning programmes (for example: learner, instructor, content, technology, learner support, and institution). It also identifies areas for further research, based on the dearth of research in some aspects of blended learning, such as frameworks supported by education theories. Higher education systems are under permanent development; although there are success stories in the Sri Lankan higher education sector, perspectives on global solutions are still missing. Hence, this research is intended to give researchers a foundation for finding a blended learning approach suitable for Sri Lanka. This study also contributes to a better understanding of blended learning by summarizing different models, subsystems, and the research gaps in those models.
    Keywords: blended learning, higher education

    Modelling metabolism in the neonatal brain

    Acute changes in cerebral blood flow and oxygen delivery directly affect brain tissue metabolism, often leading to severe lifelong disabilities or death. These events can occur during birth, with dire consequences for the infant. In order to identify and monitor these events in the neonatal brain, clinicians often use non-invasive techniques such as near-infrared spectroscopy (NIRS) and magnetic resonance spectroscopy (MRS). However, clinical interpretation of these signals is challenging. This thesis describes a number of mathematical and computational models of cerebral blood flow, oxygenation and metabolism regulation, to assist the integration of signals from multimodal measurements and to investigate brain tissue metabolic activity in neonatal preclinical and clinical studies. The scope of this work is to construct a set of useful computational tools that illuminate the brain tissue and cellular physiology giving rise to changes in clinical measurements, and hence offer information of clinical significance. The models are composed of differential equations and algebraic relations that mimic the network regulating cellular metabolism. They integrate NIRS and MRS measurements that offer insights into oxygenation and a variety of metabolic quantities such as ATP and pH. These models are thus able to explore the relation between measured signals and the physiology and biochemistry of the brain. The first three models presented in this thesis focus on the piglet brain, a preclinical animal model of the human neonatal brain. Previously published models are extended to simulate intracellular pH and used to investigate hypoxia-ischaemia experiments conducted in piglets, predicting NIRS and MRS measurements. The fourth model is an adaptation of the piglet model to the human term neonate, to investigate data from bedside NIRS monitoring of patients with birth asphyxia. Finally, a previously published, simpler adult model is adapted to the preterm neonate, simulating data from functional response studies and a functional NIRS study in neonates using a visual stimulus.
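    The thesis models themselves are far richer than can be shown here, but a toy sketch illustrates the general form described above: coupled ODEs in which blood flow supplies oxygen that metabolism converts into ATP. Every variable, parameter and rate law below is hypothetical, chosen only to show the structure.

```python
# Toy two-state metabolic model (illustrative only, NOT the thesis's model):
# tissue oxygen is delivered by blood flow and consumed by metabolism,
# which produces ATP; ATP is consumed at a fixed rate.
import numpy as np
from scipy.integrate import solve_ivp

cbf = 1.0      # cerebral blood flow (hypothetical, arbitrary units)
o2_art = 1.0   # arterial oxygen content (hypothetical)
k_met = 0.8    # metabolic oxygen-consumption rate constant (hypothetical)
k_use = 0.5    # ATP consumption rate constant (hypothetical)

def rhs(t, y):
    o2, atp = y
    delivery = cbf * (o2_art - o2)    # oxygen delivery by the blood
    consumption = k_met * o2          # oxygen used by metabolism
    d_o2 = delivery - consumption
    d_atp = consumption - k_use * atp # ATP produced in proportion to O2 use
    return [d_o2, d_atp]

sol = solve_ivp(rhs, (0.0, 50.0), [0.5, 1.0],
                t_eval=np.linspace(0.0, 50.0, 500))
print(sol.y[:, -1])  # near-steady-state O2 and ATP levels
```

    In the real models, states like these are linked to NIRS and MRS observables (e.g. haemoglobin oxygenation, ATP and pH) so that simulated and measured signals can be compared directly.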

    Genetic Evaluation of ESBL E. coli Urinary Isolates in Otago

    The incidence of infections with extended-spectrum beta-lactamase (ESBL)-producing E. coli in New Zealand is increasing. ESBL E. coli most commonly cause urinary tract infections and are seen in both community and hospital patients. The reason for the increasing incidence of ESBL E. coli infections is unknown. In this study, 66 urinary ESBL E. coli isolates from Otago in 2015 were fully genetically characterised to understand the mechanisms of transmission. The ESBL gene, E. coli sequence types, plasmid types, and genetic context (e.g. insertion sequences) of ESBL genes were determined by a combination of whole-genome and plasmid sequencing. A bioinformatic pipeline was constructed for the hybrid assembly of Illumina short reads and MinION long reads of ESBL-encoding plasmids. Significant diversity of E. coli strains, plasmids, and the genetic context of ESBL genes was seen. This suggests that multiple introductions of ESBL resistance genes or resistant bacterial strains account for the increased incidence of ESBL E. coli in this low-prevalence area. Future studies should investigate modes of transmission of ESBL E. coli and the genes they carry in Otago.
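    As a sketch of what the assembly step of such a pipeline might look like: the abstract does not name its tools, so Unicycler is assumed here purely for illustration, as a widely used hybrid assembler for Illumina plus Nanopore reads; the file names are likewise hypothetical.

```python
# Hypothetical hybrid-assembly step (Unicycler assumed for illustration;
# this is not necessarily the assembler used in the study).
import subprocess
from pathlib import Path

def hybrid_assemble(short_r1: Path, short_r2: Path,
                    long_reads: Path, out_dir: Path) -> None:
    """Hybrid-assemble Illumina paired-end and MinION long reads."""
    subprocess.run(
        [
            "unicycler",
            "-1", str(short_r1),    # Illumina forward reads
            "-2", str(short_r2),    # Illumina reverse reads
            "-l", str(long_reads),  # MinION long reads
            "-o", str(out_dir),     # output dir (assembly.fasta, log, ...)
        ],
        check=True,  # raise if the assembler exits with an error
    )

hybrid_assemble(Path("isolate_R1.fastq.gz"), Path("isolate_R2.fastq.gz"),
                Path("isolate_minion.fastq.gz"), Path("assembly_out"))
```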

    Improving Machine Learning Robustness via Adversarial Training

    As Machine Learning (ML) is increasingly used to solve various tasks in real-world applications, it is crucial to ensure at design time that ML algorithms are robust to potential worst-case noise, adversarial attacks, and highly unusual situations. Studying ML robustness will significantly help in the design of ML algorithms. In this paper, we investigate ML robustness using adversarial training in centralized and decentralized environments, where ML training and testing are conducted on one or multiple computers. In the centralized environment, we achieve test accuracies of 65.41% and 83.0% when classifying adversarial examples generated by the Fast Gradient Sign Method (FGSM) and DeepFool, respectively. Compared to existing studies, these results represent improvements of 18.41% for FGSM and 47% for DeepFool. In the decentralized environment, we study federated learning (FL) robustness using adversarial training with independent and identically distributed (IID) and non-IID data, respectively, with CIFAR-10 used throughout. In the IID case, our experimental results demonstrate a robust accuracy comparable to the one obtained in the centralized environment. In the non-IID case, the natural accuracy drops from 66.23% to 57.82%, and the robust accuracy decreases by 25% and 23.4% under C&W and Projected Gradient Descent (PGD) attacks, respectively, compared to the IID case. We further propose an IID data-sharing approach, which increases the natural accuracy to 85.04% and the robust accuracy from 57% to 72% under C&W attacks and from 59% to 67% under PGD attacks.
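    For readers unfamiliar with the technique, the following is a minimal PyTorch sketch of one FGSM adversarial-training step, i.e. training on perturbed inputs x + eps * sign(grad_x loss). It is a generic illustration, not the paper's exact setup; the epsilon value and the helper names are placeholders.

```python
# Generic FGSM adversarial-training step (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """Build FGSM adversarial examples: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

def adversarial_training_step(model, optimizer, x, y, eps=8 / 255):
    """One optimizer step on adversarial examples instead of clean inputs."""
    model.eval()                     # freeze batch-norm stats while attacking
    x_adv = fgsm_example(model, x, y, eps)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```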

    Feature selection and artifact removal in sleep stage classification

    Electroencephalograms (EEG) are essential to the analysis of sleep disorders in patients. Using electroencephalograms, electro-oculograms (EOG), and electromyograms (EMG), doctors and EEG technicians can draw conclusions about the sleep patterns of patients. In particular, the classification of the sleep data into various stages, such as NREM I-IV, REM, and awake, is extremely important. The EEG signal itself is highly sensitive to physiological and non-physiological artifacts. Trained human experts can accommodate these artifacts while analyzing the EEG signal; however, if some of these artifacts are removed prior to analysis, their job becomes easier. Furthermore, one of the biggest motivations of our team's research is the construction of a portable device that can analyze the sleep data as they are being collected. For this task, the sleep data must be analyzed completely automatically in order to make the classifications. The research presented in this thesis concerns itself with the denoising and the feature selection aspects of the team's goals. Since humans are able to recognize artifacts and ignore them prior to classification, an automated system should have the same capabilities, or close to them. As such, the denoising step is performed to condition the data prior to any other stage of sleep stage classification. As mentioned before, the denoising step by itself is useful to human EEG technicians as well. The denoising step in this research mainly addresses EOG artifacts and artifacts isolated to a single EEG channel, such as electrode pop artifacts. The first two algorithms use wavelets exclusively (BWDA and WDA), while the third algorithm is a mixture of wavelets and Independent Component Analysis (IDA). With the BWDA algorithm, determining consistent thresholds proved to be a difficult task. With the WDA algorithm, the performance was better, since the selection of thresholds was more straightforward and there was more control over defining the duration of the artifacts. The IDA algorithm performed worse than the WDA algorithm; this could have been due to the small number of measurement channels or to the automated sub-classifier used to select the denoised EEG signal from the set of ICA-demixed signals. The feature selection stage is extremely important, as it selects the most pertinent features for making a particular classification. Without such a step, the classifier would have to process useless data, which might result in poorer classification, and unnecessary features would take up valuable computer cycles. In a portable device, owing to battery consumption, wasting computer cycles is not an option. The research presented in this thesis shows the importance of a systematic feature selection step in EEG classification. The feature selection step produced excellent results using a maximum of just 5 features. During automated classification, this is extremely important, as the automated classifier only has to calculate 5 features for each given epoch.
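    To make the wavelet-denoising idea concrete, here is a minimal sketch in the spirit of the WDA approach described above; the thesis's actual wavelet family, decomposition level and threshold rule are not given here, so the db4 wavelet, 5 levels and the universal soft threshold below are illustrative choices.

```python
# Wavelet-threshold denoising of a single EEG channel (illustrative sketch,
# not the thesis's exact WDA algorithm).
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(eeg: np.ndarray, wavelet: str = "db4",
                    level: int = 5) -> np.ndarray:
    """Suppress large-amplitude artifacts by soft-thresholding the
    detail coefficients of a multilevel wavelet decomposition."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients,
    # then the "universal" threshold sigma * sqrt(2 log N).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(eeg)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(eeg)]
```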