The Retinome – Defining a reference transcriptome of the adult mammalian retina/retinal pigment epithelium
BACKGROUND: The mammalian retina is a valuable model system to study neuronal biology in health and disease. To obtain insight into intrinsic processes of the retina, great efforts are directed towards the identification and characterization of transcripts with functional relevance to this tissue. RESULTS: With the goal of assembling a first genome-wide reference transcriptome of the adult mammalian retina, referred to as the retinome, we have extracted 13,037 non-redundant annotated genes from nearly 500,000 published datasets on redundant retina/retinal pigment epithelium (RPE) transcripts. The data were generated from 27 independent studies employing a wide range of molecular and biocomputational approaches. Comparison to known retina-/RPE-specific pathways and established retinal gene networks suggests that the reference retinome may represent up to 90% of the retinal transcripts. We show that the distribution of retinal genes along the chromosomes is not random but exhibits a higher-order organization closely following the previously observed clustering of genes with increased expression. CONCLUSION: The genome-wide retinome map offers a rational basis for selecting promising candidate genes for hereditary as well as complex retinal diseases, facilitating detailed studies into normal and pathological pathways. To make this unique resource freely available, we have built a database providing a query interface to the reference retinome [1].
Electrochemical inhibition biosensor array for rapid detection of water pollutions based on bacteria immobilized on screen-printed gold electrodes
This work reports on the development of a bacteria-based inhibition biosensor array for detection of different types of pollutants, i.e. heavy metal ions (Zn²⁺), pesticides (DDVP) and petrochemicals (pentane), in water. The biosensor chip for preliminary identification of the above water pollutants is based on three types of bacteria (Escherichia coli, Shewanella oneidensis and Methylosinus trichosporium OB3b) immobilized on a screen-printed gold electrode surface via poly-L-lysine, which provides strong adhesion of the bacterial monolayer to the electrode without loss of biological function. A series of optical and DC electrochemical measurements was carried out on the three bacterial species, both immobilized on modified screen-printed gold electrodes and in solution samples. The principle of electrochemical detection is that live bacteria adsorbed (or immobilized) on the electrode surface are insulating and thus reduce the electrochemical current, whereas bacteria damaged by pollutants are less insulating. The results obtained demonstrated different effects of the three analytes studied, i.e. Zn²⁺, DDVP and pentane, on the three bacteria used. The findings are encouraging for the application of a pattern-recognition approach to pollutant identification, which may lead to the development of a novel, simple and cost-effective biosensing array for preliminary detection of environmental pollutants in water.
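The pattern-recognition step the authors anticipate could, for instance, amount to matching a three-channel inhibition fingerprint against reference patterns. The following minimal Python sketch illustrates the idea with a nearest-centroid rule; the reference patterns and all numbers are invented for illustration and are not taken from the study.

# Hypothetical sketch: identifying a pollutant from the inhibition pattern
# of the three bacterial channels. All values are illustrative.
import numpy as np

# Fractional current increase (damaged bacteria -> less insulating) per channel:
# [E. coli, S. oneidensis, M. trichosporium OB3b]
reference_patterns = {
    "Zn2+":    np.array([0.70, 0.20, 0.10]),
    "DDVP":    np.array([0.30, 0.60, 0.15]),
    "pentane": np.array([0.10, 0.15, 0.65]),
}

def identify_pollutant(reading: np.ndarray) -> str:
    """Return the reference pattern closest (Euclidean distance) to the reading."""
    return min(reference_patterns,
               key=lambda name: np.linalg.norm(reading - reference_patterns[name]))

sample = np.array([0.65, 0.25, 0.05])  # a measured inhibition pattern
print(identify_pollutant(sample))       # -> "Zn2+"

In practice the reference patterns would be calibrated from measured dose-response data for each bacterium-pollutant pair.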
Molecular evolution and functional divergence of the bestrophin protein family
BACKGROUND: Mutations in human bestrophin 1 are associated with at least three autosomal-dominant macular dystrophies, including Best disease, adult-onset vitelliform macular dystrophy and autosomal dominant vitreoretinochoroidopathy. The protein is integral to the membrane and is likely involved in Ca²⁺-dependent transport of chloride ions across cellular membranes. Bestrophin 1, together with its three homologues, forms a phylogenetically highly conserved family of proteins. RESULTS: A bioinformatics study was performed to investigate the phylogenetic relationship among the bestrophin family members and to statistically evaluate sequence conservation and functional divergence. Phylogenetic tree assembly with all available eukaryotic bestrophin sequences suggests gene duplication events in the lineage leading to the vertebrates. A common N-terminal topology, which includes four highly conserved transmembrane domains, is shared by the members of the four paralogous groups of vertebrate bestrophins and has been constrained by purifying selection. Pairwise comparison shows that altered functional constraints have occurred at specific amino acid positions after phylogenetic diversification of the paralogues. Most notably, significant functional divergence was found between bestrophin 4 and the other family members, as well as between bestrophin 2 and bestrophin 3. Site-specific profiles were established by posterior probability analysis, revealing significantly divergent clusters mainly in two hydrophilic loops and a region immediately adjacent to the last predicted transmembrane domain. Strikingly, codons 279 and 347 of human bestrophin 4 show high divergence compared to the paralogous positions, strongly indicating the functional importance of these residues for the bestrophin 4 protein. None of the functionally divergent amino acids were found to reside within obvious sequence patterns or motifs. CONCLUSION: Our study highlights the molecular evolution of the bestrophin family of transmembrane proteins and indicates amino acid residues likely relevant for distinct functional properties of the paralogues. These findings may provide a starting point for further experimental verification.
Rapid generation of chromosome-specific alphoid DNA probes using the polymerase chain reaction
Non-isotopic in situ hybridization of chromosome-specific alphoid DNA probes has become a potent tool in the study of numerical aberrations of specific human chromosomes at all stages of the cell cycle. In this paper, we describe approaches for the rapid generation of such probes using the polymerase chain reaction (PCR), and demonstrate their chromosome specificity by fluorescence in situ hybridization to normal human metaphase spreads and interphase nuclei. Oligonucleotide primers for conserved regions of the alpha satellite monomer were used to generate chromosome-specific DNA probes from somatic hybrid cells containing various human chromosomes, and from DNA libraries from sorted human chromosomes. Oligonucleotide primers for chromosome-specific regions of the alpha satellite monomer were used to generate specific DNA probes for the pericentromeric heterochromatin of human chromosomes 1, 6, 7, 17 and X directly from human genomic DNA.
The influence of electromagnetic fields from two commercially available water-treatment devices on calcium carbonate precipitation
CaCO₃ precipitation profiles, tracked by absorbance at 350 nm, showing accelerated precipitation upon exposure of the parent solutions to a pulsed electromagnetic field (PEMF) from a commercially available device.
Cardiovascular disease risk prediction using automated machine learning: A prospective study of 423,604 UK Biobank participants.
BACKGROUND: Identifying people at risk of cardiovascular diseases (CVD) is a cornerstone of preventative cardiology. Risk prediction models currently recommended by clinical guidelines are typically based on a limited number of predictors, with sub-optimal performance across all patient groups. Data-driven techniques based on machine learning (ML) might improve the performance of risk predictions by agnostically discovering novel risk predictors and learning the complex interactions between them. We tested (1) whether ML techniques based on a state-of-the-art automated ML framework (AutoPrognosis) could improve CVD risk prediction compared to traditional approaches, and (2) whether considering non-traditional variables could increase the accuracy of CVD risk predictions. METHODS AND FINDINGS: Using data on 423,604 participants without CVD at baseline in UK Biobank, we developed an ML-based model for predicting CVD risk based on 473 available variables. Our ML-based model was derived using AutoPrognosis, an algorithmic tool that automatically selects and tunes ensembles of ML modeling pipelines (comprising data imputation, feature processing, classification and calibration algorithms). We compared our model with a well-established risk prediction algorithm based on conventional CVD risk factors (Framingham score), a Cox proportional hazards (PH) model based on familiar risk factors (i.e., age, gender, smoking status, systolic blood pressure, history of diabetes, receipt of treatment for hypertension, and body mass index), and a Cox PH model based on all of the 473 available variables. Predictive performance was assessed using the area under the receiver operating characteristic curve (AUC-ROC). Overall, our AutoPrognosis model improved risk prediction (AUC-ROC: 0.774, 95% CI: 0.768-0.780) compared to the Framingham score (AUC-ROC: 0.724, 95% CI: 0.720-0.728, p < 0.001), the Cox PH model with conventional risk factors (AUC-ROC: 0.734, 95% CI: 0.729-0.739, p < 0.001), and the Cox PH model with all UK Biobank variables (AUC-ROC: 0.758, 95% CI: 0.753-0.763, p < 0.001). Out of 4,801 CVD cases recorded within 5 years of baseline, AutoPrognosis correctly predicted 368 more cases than the Framingham score. Our AutoPrognosis model included predictors that are not usually considered in existing risk prediction models, such as the individuals' usual walking pace and their self-reported overall health rating. Furthermore, our model improved risk prediction in potentially relevant sub-populations, such as individuals with a history of diabetes. We also highlight the relative benefits accrued from including more information in a predictive model (information gain) as compared to the benefits of using more complex models (modeling gain). CONCLUSIONS: Our AutoPrognosis model improves the accuracy of CVD risk prediction in the UK Biobank population. This approach performs well in traditionally poorly served patient subgroups. Additionally, AutoPrognosis uncovered novel predictors of CVD that may now be tested in prospective studies. We found that the "information gain" achieved by considering more risk factors in the predictive model was significantly higher than the "modeling gain" achieved by adopting complex predictive models.
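As an illustration of the evaluation metric used above (not of the AutoPrognosis pipeline itself), the following Python sketch compares two scoring models by AUC-ROC with a simple bootstrap confidence interval; the simulated outcomes and scores are stand-ins for the UK Biobank data.

# Illustrative sketch: AUC-ROC comparison with bootstrap confidence intervals.
# Outcomes and scores are simulated, not real cohort data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 10_000
y_true = rng.binomial(1, 0.05, n)             # simulated 5-year CVD events
score_conv = y_true * 0.5 + rng.normal(0, 1, n)  # "conventional" model scores
score_ml = y_true * 0.8 + rng.normal(0, 1, n)    # "ML-style" model scores

def auc_with_ci(y, s, n_boot=1000, alpha=0.05):
    """Point-estimate AUC plus a percentile bootstrap 95% CI."""
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if y[idx].min() == y[idx].max():      # skip resamples with one class only
            continue
        aucs.append(roc_auc_score(y[idx], s[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y, s), lo, hi

for name, s in [("conventional", score_conv), ("ML-style", score_ml)]:
    auc, lo, hi = auc_with_ci(y_true, s)
    print(f"{name}: AUC-ROC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")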
Design of a novel quantitative PCR (QPCR)-based protocol for genotyping mice carrying the neuroprotective Wallerian degeneration slow (Wlds) gene
BACKGROUND: Mice carrying the spontaneous genetic mutation known as Wallerian degeneration slow (Wlds) have a unique neuroprotective phenotype, in which the axonal and synaptic compartments of neurons are protected from degeneration following a wide variety of physical, toxic and inherited disease-inducing stimuli. This remarkable phenotype has been shown to delay onset and progression in several mouse models of neurodegenerative disease, suggesting that Wlds-mediated neuroprotection may assist in the identification of novel therapeutic targets. As a result, cross-breeding of Wlds mice with mouse models of neurodegenerative diseases is used increasingly to understand the roles of axon and synapse degeneration in disease. However, the phenotype shows strong gene-dose dependence, so it is important to distinguish offspring that are homozygous or heterozygous for the mutation. Since the Wlds mutation comprises a triplication of a region already present in the mouse genome, the most stringent way to quantify the number of mutant Wlds alleles is by copy number. Current approaches to genotyping Wlds mice are based on either Southern blots or pulsed-field gel electrophoresis, neither of which is as rapid or efficient as quantitative PCR (QPCR). RESULTS: We have developed a rapid, robust and efficient genotyping method for Wlds using QPCR. This approach differentiates, based on copy number, homozygous and heterozygous Wlds mice from wild-type mice and from each other. We show that this approach can be used to genotype mice carrying the spontaneous Wlds mutation as well as animals expressing the Wlds transgene. CONCLUSION: We have developed a QPCR genotyping method that permits rapid and effective determination of Wlds copy number. This technique will be of particular benefit in studies where Wlds mice are cross-bred with other mouse models of neurodegenerative disease in order to understand the neuroprotective processes conferred by the Wlds mutation.
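While the abstract does not spell out the quantification formula, copy-number calls from QPCR are conventionally made with the 2^(-ΔΔCt) method against a reference gene and a calibrator of known copy number; the Python sketch below shows that standard calculation with illustrative Ct values, not the authors' actual assay.

# Standard 2^(-ddCt) relative quantification for copy-number calling.
# Ct values and the calibrator copy number below are illustrative only.
def relative_copy_number(ct_target, ct_reference,
                         ct_target_cal, ct_reference_cal,
                         calibrator_copies=2):
    """Target copy number relative to a calibrator of known copies,
    assuming ~100% amplification efficiency (a factor of 2 per cycle)."""
    d_ct_sample = ct_target - ct_reference          # normalize to reference gene
    d_ct_cal = ct_target_cal - ct_reference_cal     # same for the calibrator
    dd_ct = d_ct_sample - d_ct_cal
    return calibrator_copies * 2 ** (-dd_ct)

# A sample whose target amplifies ~1 cycle earlier (relative to the reference
# gene) than a two-copy calibrator is called as ~4 copies.
print(round(relative_copy_number(22.0, 20.0, 23.0, 20.0), 1))  # -> 4.0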
Unsupervised feature selection for noisy data
Feature selection techniques are widely applied in a variety of data analysis tasks in order to reduce dimensionality. According to the type of learning, feature selection algorithms are categorized as supervised or unsupervised. In unsupervised learning scenarios, selecting features is a much harder problem, due to the lack of class labels that would facilitate the search for relevant features. The difficulty of feature selection is amplified when the data are corrupted by different kinds of noise. Almost all traditional unsupervised feature selection methods are not robust against noise in the samples: these approaches have no explicit mechanism for detaching and isolating the noise and thus cannot produce an optimal feature subset. In this article, we propose an unsupervised approach for feature selection on noisy data, called Robust Independent Feature Selection (RIFS). Specifically, we choose the feature subset that contains most of the underlying information, using the same criteria as independent component analysis (ICA), while simultaneously separating the noise as an independent component. The isolation of representative noise samples is achieved using factor oblique rotation, whereas noise identification is performed using factor pattern loadings. Extensive experimental results over diverse real-life data sets have shown the efficiency and advantage of the proposed algorithm. We thankfully acknowledge the support of the Comisión Interministerial de Ciencia y Tecnología (CICYT) under contract No. TIN2015-65316-P, which has partially funded this work.
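To make the ICA connection concrete, here is a rough Python sketch of the general idea: decompose the data into independent components, flag a noise-like component, and rank features by their loadings on the rest. This is not the RIFS algorithm (which uses factor oblique rotation and pattern loadings); the kurtosis-based noise flag and the data are invented for illustration.

# Sketch of ICA-driven unsupervised feature selection (not RIFS itself).
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                   # mostly noise features
X[:, 0] += 3 * np.sign(rng.normal(size=200))     # one informative, non-Gaussian feature

ica = FastICA(n_components=5, random_state=0)
S = ica.fit_transform(X)                          # estimated sources (n_samples, k)
A = ica.mixing_                                   # mixing matrix (n_features, k)

# Gaussian noise has excess kurtosis near 0, so flag the most Gaussian source.
noise_idx = np.argmin(np.abs(kurtosis(S, axis=0)))
loadings = np.delete(np.abs(A), noise_idx, axis=1)

scores = loadings.sum(axis=1)                     # feature relevance scores
print(np.argsort(scores)[::-1][:3])               # indices of top-ranked features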
High-transition-temperature superconductivity in the absence of the magnetic-resonance mode
The fundamental mechanism that gives rise to high-transition-temperature (high-Tc) superconductivity in the copper oxide materials has been debated since the discovery of the phenomenon. Recent work has focused on a sharp 'kink' in the kinetic energy spectra of the electrons as a possible signature of the force that creates the superconducting state. The kink has been related to a magnetic resonance and also to phonons. Here we report that infrared spectra of Bi₂Sr₂CaCu₂O₈₊δ (Bi-2212) show that this sharp feature can be separated from a broad background and, interestingly, weakens with doping before disappearing completely at a critical doping level of 0.23 holes per copper atom. Superconductivity is still strong in terms of the transition temperature (Tc ≈ 55 K), so our results rule out both the magnetic resonance peak and phonons as the principal cause of high-Tc superconductivity. The broad background, on the other hand, is a universal property of the copper-oxygen plane and a good candidate for the 'glue' that binds the electrons.
Experimental realisation of Shor's quantum factoring algorithm using qubit recycling
Quantum computational algorithms exploit quantum mechanics to solve problems exponentially faster than the best classical algorithms. Shor's quantum algorithm for fast number factoring is a key example and the prime motivator in the international effort to realise a quantum computer. However, due to the substantial resource requirement, to date there have been only four small-scale demonstrations. Here we address this resource demand and demonstrate a scalable version of Shor's algorithm in which the n-qubit control register is replaced by a single qubit that is recycled n times: the total number of qubits is one third of that required in the standard protocol. Encoding the work register in higher-dimensional states, we implement a two-photon compiled algorithm to factor N=21. The algorithmic output is distinguishable from noise, in contrast to previous demonstrations. These results point to larger-scale implementations of Shor's algorithm by harnessing scalable resource reductions applicable to all physical architectures.
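A back-of-envelope check on the stated one-third saving, under the common textbook assumption (ours, not stated in the abstract) that phase estimation on an m-qubit work register uses a 2m-qubit control register:

\[
  \underbrace{2m}_{\text{control}} + \underbrace{m}_{\text{work}} = 3m
  \qquad\longrightarrow\qquad
  \underbrace{1}_{\text{recycled control}} + \underbrace{m}_{\text{work}} = m + 1,
\]
\[
  \frac{m+1}{3m} \approx \frac{1}{3} \quad \text{for large } m,
\]

which matches the abstract's claim that recycling the control register leaves roughly one third of the standard qubit count.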