
    V<sub>H</sub> replacement in rearranged immunoglobulin genes

    Examples suggesting that all or part of the V<sub>H</sub> segment of a rearranged V<sub>H</sub>DJ<sub>H</sub> gene may be replaced by all or part of another V<sub>H</sub> have been appearing since the 1980s. Evidence has been presented for two rather different types of replacement. One of these has gained acceptance and has now been clearly demonstrated to occur. The other, proposed more recently, has not yet gained general acceptance because the same effect can be produced by polymerase chain reaction artefact. We review both types of replacement, including a critical examination of the evidence for the latter. The first type involves RAG proteins and recombination signal sequences (RSS) and occurs in immature B cells. The second was also thought to be brought about by RAG proteins and RSS. However, it has been reported in hypermutating cells, which are not thought to express RAG proteins but in which activation-induced cytidine deaminase (AID) has recently been shown to initiate homologous recombination. Re-examination of the published sequences reveals AID target sites in V<sub>H</sub>-V<sub>H</sub> junction regions and examples that resemble gene conversion.

    Repeat Transanal Advancement Flap Repair: Impact on the Overall Healing Rate of High Transsphincteric Fistulas and on Fecal Continence

    PURPOSE: Transanal advancement flap repair (TAFR) has been advocated as the treatment of choice for transsphincteric fistulas passing through the upper or middle third of the external anal sphincter. It is not clear whether previous attempts at repair adversely affect the outcome of TAFR. The purpose of the present study was to evaluate the success rate of a repeat TAFR and to assess the impact of such a second procedure on the overall healing rate of high transsphincteric fistulas and on fecal continence. METHODS: Between January 2001 and January 2005, a consecutive series of 87 patients (62 males; median age, 49 (range, 27-73) years) underwent TAFR. Median follow-up was 15 (range, 2-50) months. Patients in whom the initial operation failed were offered two further treatment options: a second flap repair or long-term indwelling seton drainage. Twenty-six patients (male:female ratio, 5:2; median age, 51 (range, 31-72) years) preferred a repeat repair. Continence status was evaluated before and after the procedures by using the Rockwood Fecal Incontinence Severity Index (RFISI). RESULTS: The healing rate after the first TAFR was 67 percent. Of the 29 patients in whom the initial procedure failed, 26 underwent a repeat TAFR. The healing rate after this second procedure was 69 percent, resulting in an overall success rate of 90 percent. Both before and after the first attempt at TAFR, the median RFISI was 7 (range, 0-34). In patients who underwent a second TAFR, the median RFISI before and after this procedure was 9 (range, 0-34) and 8 (range, 0-34), respectively. None of these changes was statistically significant. CONCLUSIONS: Repeat TAFR increases the overall healing rate of high transsphincteric fistulas from 67 percent after one attempt to 90 percent after two attempts, without a deteriorating effect on fecal continence.

    Detection of regulator genes and eQTLs in gene networks

    Genetic differences between individuals associated with quantitative phenotypic traits, including disease states, are usually found in non-coding genomic regions. These genetic variants are often also associated with differences in expression levels of nearby genes (they are "expression quantitative trait loci", or eQTLs for short) and presumably play a gene-regulatory role, affecting the status of molecular networks of interacting genes, proteins and metabolites. Computational systems biology approaches to reconstruct causal gene networks from large-scale omics data have therefore become essential to understand the structure of networks controlled by eQTLs together with other regulatory genes, and to generate detailed hypotheses about the molecular mechanisms that lead from genotype to phenotype. Here we review the main analytical methods and software packages to identify eQTLs and their associated genes, to reconstruct co-expression networks and modules, to reconstruct causal Bayesian gene and module networks, and to validate predicted networks in silico.
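The eQTL-identification step this review surveys can be illustrated with a minimal single-marker scan: regress a gene's expression on genotype dosage (0/1/2 copies of the alternate allele) and rank markers by the strength of the linear association. This is a generic sketch, not any particular package from the review; the function name and toy data are illustrative.

```python
# Minimal single-marker eQTL association: least-squares slope and
# Pearson correlation of expression against genotype dosage.
def eqtl_association(genotypes, expression):
    """Return (slope, r) for one SNP against one gene's expression."""
    n = len(genotypes)
    mg = sum(genotypes) / n
    me = sum(expression) / n
    cov = sum((g - mg) * (e - me) for g, e in zip(genotypes, expression))
    var_g = sum((g - mg) ** 2 for g in genotypes)
    var_e = sum((e - me) ** 2 for e in expression)
    slope = cov / var_g
    r = cov / (var_g ** 0.5 * var_e ** 0.5)
    return slope, r

# Toy data: expression rises with dosage of the alternate allele.
dosage = [0, 0, 1, 1, 2, 2]
expr = [1.0, 1.2, 2.1, 1.9, 3.0, 3.2]
slope, r = eqtl_association(dosage, expr)
```

In practice this test is run genome-wide with permutation- or FDR-based significance thresholds; the review's methods build causal networks on top of such associations.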

    Classification of heterogeneous microarray data by maximum entropy kernel

    <p>Abstract</p> <p>Background</p> <p>There is a large amount of microarray data accumulating in public databases, providing various data waiting to be analyzed jointly. Powerful kernel-based methods are commonly used in microarray analyses with support vector machines (SVMs) to approach a wide range of classification problems. However, the standard vectorial data kernel family (linear, RBF, etc.) that takes vectorial data as input often fails in prediction if the data come from different platforms or laboratories, due to the low gene overlaps or consistencies between the different datasets.</p> <p>Results</p> <p>We introduce a new type of kernel, called the maximum entropy (ME) kernel, which has no pre-defined function but is generated by kernel entropy maximization with sample distance matrices as constraints, into the field of SVM classification of microarray data. We assessed the performance of the ME kernel with three different datasets: heterogeneous kidney carcinoma, noise-introduced leukemia, and heterogeneous oral cavity carcinoma metastasis data. The results clearly show that the ME kernel is very robust for heterogeneous data containing missing values and high noise, and gives higher prediction accuracies than the standard kernels, namely linear, polynomial and RBF.</p> <p>Conclusion</p> <p>The results demonstrate its utility in effectively analyzing promiscuous microarray data of rare specimens, e.g., minor diseases or species, that present difficulty in compiling homogeneous data in a single laboratory.</p>
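For contrast with the paper's ME kernel, here is a sketch of the fragile baseline it addresses: a standard RBF kernel computed when samples have missing genes, using a common pairwise-complete heuristic (distance over shared features, rescaled to the full feature count). The rescaling, the `gamma` value, and all names are assumptions for illustration, not the paper's method.

```python
import math

def pairwise_complete_dist(x, y):
    """Euclidean distance over features present (non-None) in both
    samples, rescaled to the full feature count."""
    shared = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    if not shared:
        return float("inf")
    d2 = sum((a - b) ** 2 for a, b in shared)
    return math.sqrt(d2 * len(x) / len(shared))

def rbf_kernel(samples, gamma=0.1):
    """Gram matrix of the RBF kernel exp(-gamma * d^2)."""
    return [[math.exp(-gamma * pairwise_complete_dist(x, y) ** 2)
             for y in samples] for x in samples]

# Three samples over three genes, with platform-dependent missing values.
data = [[1.0, None, 2.0], [1.1, 0.5, None], [3.0, 0.4, 2.5]]
K = rbf_kernel(data)
```

When the gene overlap between platforms is small, such rescaled distances become noisy and the resulting Gram matrix can even lose positive semi-definiteness, which is the motivation for learning the kernel directly under distance constraints instead.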

    Health impact assessment of particulate pollution in Tallinn using fine spatial resolution and modeling techniques

    <p>Abstract</p> <p>Background</p> <p>Health impact assessments (HIA) use information on exposure, baseline mortality/morbidity and exposure-response functions from epidemiological studies in order to quantify the health impacts of existing situations and/or alternative scenarios. The aim of this study was to improve HIA methods for air pollution studies in situations where exposures can be estimated using GIS with high spatial resolution and dispersion modeling approaches.</p> <p>Methods</p> <p>Tallinn was divided into 84 sections according to neighborhoods, with a total population of approx. 390 000 persons. Actual baseline rates for total mortality and hospitalization with cardiovascular and respiratory diagnosis were identified. The exposure to fine particles (PM<sub>2.5</sub>) from local emissions was defined as the modeled annual levels. The model validation and morbidity assessment were based on 2006 PM<sub>10</sub> or PM<sub>2.5</sub> levels at 3 monitoring stations. The exposure-response coefficients used were, for total mortality, 6.2% (95% CI 1.6–11%) per 10 μg/m<sup>3</sup> increase of annual mean PM<sub>2.5</sub> concentration, and for the assessment of respiratory and cardiovascular hospitalizations, 1.14% (95% CI 0.62–1.67%) and 0.73% (95% CI 0.47–0.93%) per 10 μg/m<sup>3</sup> increase of PM<sub>10</sub>. The direct costs related to morbidity were calculated according to hospital treatment expenses in 2005 and the cost of premature deaths using the concept of Value of Life Year (VOLY).</p> <p>Results</p> <p>The annual population-weighted modeled exposure to locally emitted PM<sub>2.5</sub> in Tallinn was 11.6 μg/m<sup>3</sup>. Our analysis showed that it corresponds to 296 (95% CI 76–528) premature deaths, resulting in 3859 (95% CI 1023–6636) Years of Life Lost (YLL) per year. The average decrease in life expectancy at birth per resident of Tallinn was estimated to be 0.64 (95% CI 0.17–1.10) years. While in the polluted city centre this may reach 1.17 years, in the least polluted neighborhoods it remains between 0.1 and 0.3 years. When dividing the YLL by the number of premature deaths, the decrease in life expectancy among the actual cases is around 13 years. As for the morbidity, the short-term effects of air pollution were estimated to result in an additional 71 (95% CI 43–104) respiratory and 204 (95% CI 131–260) cardiovascular hospitalizations per year. The biggest external costs are related to the long-term effects on mortality: on average €150 (95% CI 40–260) million annually. In comparison, the costs of short-term air-pollution-driven hospitalizations are small: €0.3 (95% CI 0.2–0.4) million.</p> <p>Conclusion</p> <p>Sectioning the city for analysis and using GIS systems can help to improve the accuracy of air pollution health impact estimations, especially in study areas with poor air pollution monitoring data but available dispersion models.</p>
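The long-term mortality arithmetic described above can be sketched with a log-linear exposure-response function, a standard form in such HIAs. Only the 6.2% per 10 μg/m³ coefficient and the 11.6 μg/m³ population-weighted exposure come from the abstract; the baseline-deaths figure below is a made-up placeholder, not a Tallinn statistic, and the exact functional form used in the study is an assumption here.

```python
import math

def attributable_deaths(baseline_deaths, pm25, coef_per_10=0.062):
    """Deaths attributable to annual exposure pm25 (ug/m3), given a
    relative risk of (1 + coef_per_10) per 10 ug/m3, log-linear form."""
    rr = math.exp(math.log(1.0 + coef_per_10) * pm25 / 10.0)
    attributable_fraction = (rr - 1.0) / rr
    return baseline_deaths * attributable_fraction

# Placeholder baseline of 4500 annual deaths; exposure from the abstract.
deaths = attributable_deaths(baseline_deaths=4500, pm25=11.6)
```

The same machinery, applied per neighborhood section with section-specific exposures and baseline rates, is what lets the study report spatially resolved life-expectancy losses.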

    Cdx4 and Menin Co-Regulate Hoxa9 Expression in Hematopoietic Cells

    BACKGROUND: Transcription factor Cdx4 and transcriptional coregulator menin are essential for Hoxa9 expression and normal hematopoiesis. However, the precise mechanism underlying Hoxa9 regulation is not clear. METHODS AND FINDINGS: Here, we show that the expression level of Hoxa9 is correlated with the location of increased trimethylated histone H3 lysine 4 (H3K4me3). The active and repressive histone modifications co-exist along the Hoxa9 regulatory region. We further demonstrate that both Cdx4 and menin bind to the same regulatory region at the Hoxa9 locus in vivo, and co-activate the reporter gene driven by the Hoxa9 cis-elements that contain Cdx4 binding sites. Ablation of menin abrogates Cdx4 access to the chromatin target and significantly reduces both active and repressive histone H3 modifications at the Hoxa9 locus. CONCLUSION: These results suggest a functional link among Cdx4, menin and histone modifications in Hoxa9 regulation in hematopoietic cells.

    SlimPLS: A Method for Feature Selection in Gene Expression-Based Disease Classification

    A major challenge in biomedical studies in recent years has been the classification of gene expression profiles into categories, such as cases and controls. This is done by first training a classifier on a labeled training set containing samples from the two populations, and then using that classifier to predict the labels of new samples. Such predictions have recently been shown to improve the diagnosis and treatment selection practices for several diseases. This procedure is complicated, however, by the high dimensionality of the data. While microarrays can measure the levels of thousands of genes per sample, case-control microarray studies usually involve no more than several dozen samples. Standard classifiers do not work well in these situations, where the number of features (gene expression levels measured in these microarrays) far exceeds the number of samples. Selecting only the features that are most relevant for discriminating between the two categories can help construct better classifiers, in terms of both accuracy and efficiency. In this work we developed a novel method for multivariate feature selection based on the Partial Least Squares algorithm. We compared the method's variants with common feature selection techniques across a large number of real case-control datasets, using several classifiers. We demonstrate the advantages of the method and the preferable combinations of classifier and feature selection technique.
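The PLS connection can be made concrete with the first PLS component: for centered data its weight vector is proportional to X<sup>T</sup>y, and ranking features by the absolute entries of that vector is the simplest PLS-based filter. This sketch shows only that first-component step under those assumptions, not the SlimPLS variants themselves; all names and the toy data are illustrative.

```python
# First PLS component weights for a single response (PLS1): the
# normalized covariance between each centered feature and the labels.
def pls_first_component_weights(X, y):
    n, p = len(X), len(X[0])
    col_means = [sum(row[j] for row in X) / n for j in range(p)]
    y_mean = sum(y) / n
    w = [sum((X[i][j] - col_means[j]) * (y[i] - y_mean) for i in range(n))
         for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    return [v / norm for v in w]

# Toy data: feature 0 tracks the labels, feature 1 is noise.
X = [[1.0, 0.3], [0.9, 0.1], [-1.1, 0.2], [-0.8, 0.4]]
y = [1.0, 1.0, -1.0, -1.0]
w = pls_first_component_weights(X, y)
top_feature = max(range(len(w)), key=lambda j: abs(w[j]))
```

A full PLS-based selector would deflate X and extract further components, capturing multivariate structure that single-gene filters such as the t-test miss.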

    Efficient Sparse Coding in Early Sensory Processing: Lessons from Signal Recovery

    Sensory representations are not only sparse, but often overcomplete: coding units significantly outnumber the input units. For models of neural coding this overcompleteness poses a computational challenge for shaping the signal processing channels as well as for using the large and sparse representations in an efficient way. We argue that higher-level overcompleteness becomes computationally tractable by imposing sparsity on synaptic activity, and we also show that such structural sparsity can be facilitated by statistics-based decomposition of the stimuli into typical and atypical parts prior to sparse coding. Typical parts represent large-scale correlations, thus they can be significantly compressed. Atypical parts, on the other hand, represent local features and are the subjects of actual sparse coding. When applied to natural images, our decomposition-based sparse coding model can efficiently form overcomplete codes, and both center-surround and oriented filters are obtained, similar to those observed in the retina and the primary visual cortex, respectively. Therefore we hypothesize that the proposed computational architecture can be seen as a coherent functional model of the first stages of sensory coding in early vision.