
    Characterization of the human NLR protein NLRC5

    Nucleotide-binding domain, leucine-rich repeat (NLR)-containing proteins play important roles in the innate immune system as intracellular pattern recognition receptors. The most prominent members, NOD1, NOD2 and NLRP3, have been shown extensively to trigger NF-κB activation (NOD1, NOD2) or IL-1β/IL-18 processing (NLRP3) upon pathogen infection. Nonetheless, other functions beyond pathogen recognition have also been reported for some NLR proteins. Here we report the first characterization of the human NLR protein NLRC5. NLRC5 lacks the typical N-terminal CARD or PYRIN domain of most NLR proteins, but harbours a death-domain-fold effector domain of as yet unknown function. Interestingly, NACHT and LRR domain alignments reveal close homology to the MHC class II transcriptional activator (CIITA), which is responsible for the transcriptional induction of MHC class II molecules, and moderate homology to NOD1 and NOD2.

    In the first part of this study, we addressed the expression and regulation of NLRC5 in different tissues and cell lines. We detected NLRC5 expression primarily in cells and tissues of the immune system, including CD4+ and CD8+ T cells as well as spleen, lymph node and bone marrow. Furthermore, we observed TLR3-dependent NLRC5 induction upon stimulation with the dsRNA mimic poly(I:C), as well as TLR3-independent NLRC5 induction in a Sendai virus (SeV)-based infection model. In line with this, we revealed a role for NLRC5 in the type I interferon (IFN) response against RNA viruses. Moreover, we adapted an SeV infection model in primary human dermal fibroblasts (hFibr), demonstrating a distinct role for NLRC5 in anti-viral immune processes.

    In the second part, we investigated the role of NLRC5 in MHC class I promoter activation. In analogy to MHC class II promoter activation by the non-DNA-binding coactivator CIITA, we established a clear role for NLRC5 in MHC class I expression and identified the domains that are important for nuclear translocation and MHC class I promoter activation. We further analysed the involvement of a DNA-binding complex, the so-called enhanceosome, in NLRC5-dependent MHC class I expression; this complex is pivotal for CIITA-dependent MHC class II expression. Finally, we generated NLRC5-CIITA chimeric proteins to dissect NLRC5-dependent MHC class I and CIITA-dependent MHC class II activation in more detail. Domain swapping of the N-terminal effector domains revealed that the NLRC5 N-terminal effector domain fused to the C-terminus of CIITA is sufficient to activate both MHC class I and MHC class II expression. Taken together, in this study we identified a role for NLRC5 in anti-viral immune responses and further contributed to the understanding of NLRC5-mediated MHC class I expression.

    Web services for transcriptomics

    Transcriptomics is part of a family of disciplines focussing on high-throughput molecular biology experiments. In the case of transcriptomics, scientists study the expression of genes resulting in transcripts. These transcripts can either perform a biological function themselves or act as messenger molecules containing a copy of the genetic code, which the ribosomes can use as templates to synthesise proteins. Over the past decade microarray technology has become the dominant technology for performing high-throughput gene expression experiments. A microarray contains short sequences (oligos or probes), which are the reverse complement of fragments of the targets (transcripts or sequences derived thereof). When genes are expressed, their transcripts (or sequences derived thereof) can hybridise to these probes. Many thousands of copies of a probe are immobilised in a small region on a support. These regions are called spots, and a typical microarray contains thousands or sometimes even more than a million spots. When the transcripts (or sequences derived thereof) are fluorescently labelled and it is known which spots are located where on the support, a fluorescent signal in a certain region represents expression of a certain gene.

    For the interpretation of microarray data it is essential to make sure the oligos are specific for their targets. Hence, for proper probe design one needs to know all transcripts that may be expressed and how well they can hybridise with candidate oligos. Oligo design therefore requires:

    1. A complete reference genome assembly.
    2. Complete annotation of the genome, to know which parts may be transcribed.
    3. Insight into the amount of natural variation in the genomes of different individuals.
    4. Knowledge of how experimental conditions influence the ability of probes to hybridise with certain transcripts.

    Unfortunately such complete information does not exist, but many microarrays were nevertheless designed based on incomplete data. This can lead to a variety of problems, including cross-hybridisation (non-specific binding), erroneously annotated and therefore misleading probes, missing probes and orphan probes. Fortunately the amount of information on genes and their transcripts increases rapidly, so it is possible to improve the reliability of microarray data analysis by regularly updating the probe annotation against updated databases for genomes and their annotation. Several tools have been developed for this purpose, but these either used simplistic annotation strategies or did not support our species and/or microarray platforms of interest. Therefore we developed OligoRAP (Oligo Re-Annotation Pipeline), which is described in chapter 2. OligoRAP was designed to take advantage of, amongst others, annotation provided by Ensembl, the largest genome annotation effort in the world. OligoRAP thereby supports most of the major animal model organisms, including farm animals such as chicken and cow. In addition to supporting our species and array platforms of interest, OligoRAP employs a new annotation strategy that combines information from genome and transcript databases in a non-redundant way to obtain the most complete annotation possible. In chapter 3 we compared annotation generated with three oligo annotation pipelines, including OligoRAP, and investigated the effect on the functional analysis of a microarray experiment involving chickens infected with Eimeria parasites.
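    As a minimal illustration of the probe-specificity requirement described above, the sketch below checks whether a candidate oligo's target site occurs in exactly one transcript of a toy reference set. The transcript sequences and probe are invented placeholders, and real pipelines such as OligoRAP rely on approximate alignments rather than exact string matching.

```python
# Toy probe-specificity check: a probe is the reverse complement of a fragment
# of its intended target, so we search all known transcripts for that fragment.
# Sequences below are invented; real checks use alignment tools, not exact matches.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def transcripts_hit(probe: str, transcripts: dict) -> list:
    """Return IDs of all transcripts that contain the probe's target site."""
    target_site = reverse_complement(probe)
    return [tid for tid, seq in transcripts.items() if target_site in seq]

transcripts = {
    "geneA.t1": "ATGGCGTACGTTAGCCTAGGCTTAACGGATCC",
    "geneB.t1": "ATGCCCTTAACGGATCCGGGTTTAAACCCGGG",
}

probe = reverse_complement("TTAACGGATCC")  # fragment shared by both transcripts
hits = transcripts_hit(probe, transcripts)
if len(hits) != 1:
    print("probe is not gene-specific; cross-hybridisation risk:", hits)
```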
    As an example of functional analysis we investigated whether up- or downregulated genes were enriched for terms from the Gene Ontology (GO). We discovered that small differences in annotation strategy could lead to alarmingly large differences in enriched GO terms. It is therefore important to know which annotation strategy works best, but it was not possible to assess this due to the lack of a good reference or benchmark dataset. There are a few limited studies investigating the hybridisation potential of imperfect alignments of oligos with potential targets, but in general such data is scarce. In addition it is difficult to compare these studies due to differences in experimental setup, including different hybridisation temperatures and different probe lengths. As a result we cannot determine exact thresholds for the alignments of oligos with non-targets to prevent cross-hybridisation, but from these studies we can get an idea of the range of thresholds that would be required for optimal target specificity. Note that in these studies the experimental conditions were first optimised for an optimal signal-to-noise ratio for hybridisation of oligos with targets; these conditions were then used to determine the thresholds for alignments of oligos with non-targets to prevent cross-hybridisation.

    Chapter 4 describes a parameter sweep using OligoRAP to explore hybridisation-potential thresholds from a different perspective. Given the mouse genome, thresholds were determined for the largest number of gene-specific probes. Using those thresholds, we then determined thresholds for optimal signal-to-noise ratios. Unfortunately the annotation-based thresholds we found did not fall within the range of experimentally determined thresholds; in fact they were not even close. Hence what was experimentally determined to be optimal for the technology was not in sync with what was determined to be optimal for the mouse genome. Further research will be required to determine whether microarray technology can be modified in such a way that it is better suited for gene expression experiments. The requirement of a priori information on possible targets and the lack of sufficient knowledge of how experimental conditions influence hybridisation potential can be considered the Achilles' heels of microarray technology.

    Chapter 5 is a collection of three application notes describing other tools that can aid in the analysis of transcriptomics data:

    1. RShell, a plugin for the Taverna workbench that allows users to execute statistical computations remotely on R servers.
    2. MADMAX services, which provide quality control and normalisation of microarray data for Affymetrix arrays.
    3. GeneIlluminator, a tool to disambiguate gene symbols, allowing researchers to retrieve literature specifically for their genes of interest even when those gene symbols have many synonyms and homonyms.

    Web services

    High-throughput experiments like those performed in transcriptomics usually require subsequent analysis with many different tools to make biological sense of the data. Installing all these tools on a single, local computer and making them compatible so that users can build analysis pipelines can be very cumbersome. Therefore distributed analysis strategies have been explored extensively over the past decades. In a distributed system, providers offer remote access to tools and data via the Internet, allowing users to create pipelines from modules from all over the globe.
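    As a minimal sketch of this distributed approach, the snippet below chains two remote tools into a small pipeline over HTTP. The endpoints and message formats are hypothetical placeholders; the services referred to in this thesis (for example those composed by OligoRAP) define their own, often SOAP-based, interfaces.

```python
# Minimal sketch of chaining two remote tools into a small analysis pipeline.
# The endpoints and payloads are hypothetical placeholders, not real services.
import requests

ALIGN_URL = "https://example.org/services/align"        # hypothetical alignment service
ANNOTATE_URL = "https://example.org/services/annotate"  # hypothetical annotation service

def run_pipeline(oligo_sequence: str) -> dict:
    # Step 1: the provider aligns the oligo against a reference genome remotely.
    alignments = requests.post(ALIGN_URL, json={"sequence": oligo_sequence}, timeout=60).json()
    # Step 2: a second, independently hosted service annotates the alignments.
    return requests.post(ANNOTATE_URL, json={"alignments": alignments}, timeout=60).json()

if __name__ == "__main__":
    print(run_pipeline("ACGTACGTACGTACGTACGTACGTACGT"))
```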
    Chapter 1 provides an overview of the evolution of web services, which represent the latest generation of technology for creating distributed systems. The major advantage of web services over older technology is that they are independent of programming language, Internet communication protocol and operating system. Web services are therefore very flexible, and most of them are firewall-proof. Web services play a major role in the remaining chapters of this thesis: OligoRAP is a workflow built entirely from web services, and the tools described in chapter 5 all provide remote programmatic access via web service interfaces. Although web services can be used to build relatively complex workflows like OligoRAP, a lack of (mainly de facto) standards and of user-friendly clients has so far limited the use of web services to bioinformaticians. A semantic web where biologists can easily link web services into complex workflows does not yet exist.

    Arousal and Valence Prediction in Spontaneous Emotional Speech: Felt versus Perceived Emotion

    In this paper, we describe emotion recognition experiments carried out on spontaneous affective speech with the aim of comparing the added value of annotating felt emotion versus annotating perceived emotion. Using speech material available in the TNO-GAMING corpus (a corpus containing audiovisual recordings of people playing videogames), speech-based affect recognizers were developed that can predict scalar Arousal and Valence values. Two types of recognizers were developed in parallel: one trained with felt-emotion annotations (generated by the gamers themselves) and one trained with perceived/observed-emotion annotations (generated by a group of observers). The experiments showed that, in speech, with the methods and features currently used, observed emotions are easier to predict than felt emotions. The results suggest that recognition performance strongly depends on how and by whom the emotion annotations are carried out.
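    A minimal sketch of this kind of comparison, using synthetic data in place of the TNO-GAMING features and labels: the same regressor is trained once on "perceived" and once on "felt" valence annotations, and the correlation between labels and cross-validated predictions is compared. The learner, features and noise model are illustrative assumptions, not the setup used in the paper.

```python
# Sketch: compare how well the same features predict "perceived" vs "felt" valence.
# Everything here is synthetic; the corpus, features and learner in the paper differ.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                               # stand-in acoustic feature vectors
w = rng.normal(size=20)
valence_perceived = X @ w + rng.normal(scale=0.5, size=200)  # labels that track the features closely
valence_felt = X @ w + rng.normal(scale=2.0, size=200)       # noisier, harder-to-predict labels

for name, y in [("perceived", valence_perceived), ("felt", valence_felt)]:
    pred = cross_val_predict(SVR(), X, y, cv=5)
    r = np.corrcoef(y, pred)[0, 1]
    print(f"{name:9s} valence: correlation(labels, predictions) = {r:.2f}")
```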

    Modeling the Cognitive Task Load and Performance of Naval Operators

    Operators on naval ships have to act in dynamic, critical and high-demand task environments. For these environments, a cognitive task load (CTL) model has been proposed as the foundation for three operator support functions: adaptive task allocation, cognitive aids and resource feedback. This paper presents the construction of such a model as a Bayesian network with probability relationships between CTL and performance. The network is trained and tested with two datasets: operator performance with an adaptive user interface in a lab setting, and operator performance on a high-tech sailing ship. The naïve Bayesian network turned out to be the best choice, providing performance estimations with 86% and 74% accuracy for the lab and ship data, respectively. Overall, the resulting model generalizes well over the two datasets. It will be used to estimate operator performance under momentary CTL conditions and to set the thresholds of the load-mitigation strategies for the three support functions.
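    A minimal sketch of the modelling idea, with synthetic data: a naïve Bayes classifier relates a few CTL indicators to a binary performance outcome and is scored by cross-validation. The feature names, data-generating rule and resulting accuracy are placeholders, not the model or measures used in the paper.

```python
# Sketch: naive Bayes mapping cognitive task load (CTL) indicators to performance.
# Data is synthetic; the features stand in for CTL measures (e.g. time occupied,
# level of information processing, task-set switches) used in such models.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n = 300
ctl = rng.uniform(0.0, 1.0, size=(n, 3))  # three CTL indicators scaled to [0, 1]

# Illustrative rule only: performance tends to degrade when overall load is high.
performance_ok = (ctl.mean(axis=1) + rng.normal(scale=0.15, size=n)) < 0.6

accuracy = cross_val_score(GaussianNB(), ctl, performance_ok, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")
```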