
    Assembling proteomics data as a prerequisite for the analysis of large scale experiments

    Background: Despite the complete determination of the genome sequences of a huge number of bacteria, their proteomes remain relatively poorly defined. Besides new methods to increase the number of identified proteins, new database applications are necessary to store and present the results of large-scale proteomics experiments. Results: In the present study, a database concept has been developed to address these issues and to offer complete information via a web interface. In our concept, the Oracle-based data repository system SQL-LIMS plays the central role in the proteomics workflow and was applied to the proteomes of Mycobacterium tuberculosis, Helicobacter pylori, Salmonella typhimurium and protein complexes such as the 20S proteasome. Technical operations of our proteomics labs were used as the standard for SQL-LIMS template creation. By means of a Java-based data parser, post-processed data of different approaches, such as LC/ESI-MS, MALDI-MS and 2-D gel electrophoresis (2-DE), were stored in SQL-LIMS. A minimum set of the proteomics data was transferred to our public 2D-PAGE database using a Java-based interface (Data Transfer Tool, DTT) in accordance with the PEDRo standardization. Furthermore, the stored proteomics data were extractable from SQL-LIMS via XML. Conclusion: The Oracle-based data repository system SQL-LIMS played the central role in the proteomics workflow concept. Technical operations of our proteomics labs were used as standards for SQL-LIMS templates. Using a Java-based parser, post-processed data of different approaches such as LC/ESI-MS, MALDI-MS, 1-DE and 2-DE were stored in SQL-LIMS. Thus, the disparate data formats of different instruments were unified and stored in SQL-LIMS tables. Moreover, a unique submission identifier allowed fast access to all experimental data. This was the main advantage compared to multi-software solutions, especially where personnel fluctuation is high. Large-scale and high-throughput experiments must be managed in a comprehensive repository system such as SQL-LIMS so that results can be queried in a systematic manner. On the other hand, such database systems are expensive and require at least one full-time administrator and a specialized lab manager. Moreover, the rapid technical development in proteomics can make it difficult to accommodate new data formats. To summarize, SQL-LIMS met the requirements of proteomics data handling, especially in skilled processes such as gel electrophoresis and mass spectrometry, and fulfilled the PSI standardization criteria. The data transfer into the public domain via the DTT facilitated validation of proteomics data. Additionally, evaluation of mass spectra by post-processing with MS-Screener improved the reliability of mass analysis and prevented the storage of junk data.
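    The core idea of the workflow above — parsing heterogeneous instrument exports into one unified schema keyed by a submission identifier — can be sketched as follows. This is a minimal illustration, not the actual SQL-LIMS parser: the file formats, field names, and protein accessions are hypothetical.

```python
# Sketch: unify outputs of different instruments (hypothetical formats)
# into one record schema, keyed by a single submission identifier.
import json
import uuid

def parse_maldi(raw):
    # Hypothetical MALDI-MS export: "protein_id<TAB>mass" per line.
    rows = [line.split("\t") for line in raw.strip().splitlines()]
    return [{"protein": p, "mass": float(m), "method": "MALDI-MS"} for p, m in rows]

def parse_2de(raw):
    # Hypothetical 2-DE export: "spot,protein_id" per line.
    rows = [line.split(",") for line in raw.strip().splitlines()]
    return [{"protein": p, "spot": s, "method": "2-DE"} for s, p in rows]

def store(records):
    # One submission identifier gives fast access to all experimental data.
    submission_id = str(uuid.uuid4())
    return {"submission": submission_id, "records": records}

entry = store(parse_maldi("P12345\t14532.1") + parse_2de("A7,P67890"))
print(json.dumps(entry, indent=2))
```

    The point of the design is that downstream queries never need to know which instrument produced a record; they filter the unified table by submission identifier or by the `method` field.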

    Beneficial Effects of Estrogen in a Mouse Model of Cerebrovascular Insufficiency

    BACKGROUND: The M(5) muscarinic acetylcholine receptor is known to play a crucial role in mediating acetylcholine-dependent dilation of cerebral blood vessels. Previously, we reported that male M(5) muscarinic acetylcholine receptor knockout mice (M5R(-/-) mice) suffer from constitutive constriction of cerebral arteries, reduced cerebral blood flow, dendritic atrophy, and short-term memory loss, without necrosis and/or inflammation in the brain. METHODOLOGY/PRINCIPAL FINDINGS: We employed magnetic resonance angiography to study the area of the basilar artery in male and female M5R(-/-) mice. Here we show that female M5R(-/-) mice did not show the reduction in vascular area observed in male M5R(-/-) mice. However, ovariectomized female M5R(-/-) mice displayed phenotypic changes similar to male M5R(-/-) mice, strongly suggesting that estrogen plays a key role in the observed gender differences. We found that 17beta-estradiol (E2) induced nitric oxide release and ERK activation in a conditionally immortalized mouse brain cerebrovascular endothelial cell line. Agonists of ERalpha, ERbeta, and GPR30 promoted ERK activation in this cell line. Moreover, in vivo magnetic resonance imaging studies showed that the cross section of the basilar artery was restored to normal in male M5R(-/-) mice treated with E2. Treatment with E2 also improved the performance of male M5R(-/-) mice in a cognitive test and reduced the atrophy of neural dendrites in the cerebral cortex and hippocampus. Three-dimensional reconstruction of electron microscope images also revealed astrocyte swelling in the cortex and hippocampus of M5R(-/-) mice; this phenotype was reversed by E2 treatment, as were the observed deficits in dendrite morphology and the number of synapses. CONCLUSIONS/SIGNIFICANCE: Our findings indicate that M5R(-/-) mice represent an excellent novel model system for studying the beneficial effects of estrogen on cerebrovascular function and cognition. E2 may offer new therapeutic perspectives for the treatment of cerebrovascular insufficiency-related memory dysfunction.

    Burden of Illness and Quality of Life in Tuberous Sclerosis Complex: Findings From the TOSCA Study

    Research on tuberous sclerosis complex (TSC) to date has focused mainly on the physical manifestations of the disease. In contrast, the psychosocial impact of TSC has received far less attention. The aim of this study was therefore to examine the impact of TSC on the health, quality of life (QoL), and psychosocial well-being of individuals with TSC and their families. Questionnaires with disease-specific questions on burden of illness (BOI) and validated QoL questionnaires were used. After completion of additional informed consent, we included 143 individuals who participated in the TOSCA (TuberOus SClerosis registry to increase disease Awareness) study. Our results highlighted the substantial burden of TSC on the personal lives of individuals with TSC and their families. Nearly half of the patients (42.1%) reported that TSC had hindered their education or career, as did many of their caregivers (17.6% of those employed; 58.8% of those unemployed). Most caregivers (76.5%) indicated that TSC affected family life and social and working relationships. Furthermore, well-coordinated care was lacking: a smooth transition from pediatric to adult care was reported by only 36.8% of adult patients, and financial, social, and psychological support by 21.1%, 0%, and 7.9%, respectively. In addition, the moderate rates of pain/discomfort (35%) and anxiety/depression (43.4%) reported across all ages and levels of disease demonstrate the high BOI and low QoL in this vulnerable population.

    The Current State of Proteomics in GI Oncology

    Proteomics refers to the study of the entire set of proteins in a given cell or tissue. With the extensive development of protein separation, mass spectrometry, and bioinformatics technologies, clinical proteomics has shown its potential as a powerful approach for biomarker discovery, particularly in the area of oncology. More than 130 exploratory studies have defined candidate markers in serum, gastrointestinal (GI) fluids, or cancer tissue. In this article, we introduce the commonly adopted proteomic technologies and describe the results of a comprehensive review of studies that have applied these technologies to GI oncology, with a particular emphasis on developments in the last 3 years. We discuss reasons why the more than 130 studies to date have had little discernible clinical impact, and we outline steps that may allow proteomics to realize its promise for early detection of disease, monitoring of disease recurrence, and identification of targets for individualized therapy.

    Relativistic Dynamics and Extreme Mass Ratio Inspirals

    It is now well established that a dark, compact object (DCO), very likely a massive black hole (MBH) of around four million solar masses, is lurking at the centre of the Milky Way. While a consensus is emerging about the origin and growth of supermassive black holes (with masses larger than a billion solar masses), MBHs with smaller masses, such as the one in our galactic centre, remain understudied and enigmatic. The key to understanding these holes - how some of them grow by orders of magnitude in mass - lies in understanding the dynamics of the stars in the galactic neighbourhood. Stars interact with the central MBH primarily through their gradual inspiral due to the emission of gravitational radiation. Stars also produce gas, through collisions and disruptions brought about by the strong central tidal field, which is subsequently accreted by the MBH. Such processes can contribute significantly to the mass of the MBH, and progress in understanding them requires theoretical work in preparation for future millihertz gravitational radiation missions and X-ray observatories. In particular, a unique probe of these regions is the gravitational radiation emitted by some compact stars very close to the black holes, which could be surveyed by a millihertz gravitational wave interferometer scrutinizing the range of masses fundamental to understanding the origin and growth of supermassive black holes. By extracting the information carried by the gravitational radiation, we can determine the mass and spin of the central MBH with unprecedented precision, and we can determine how the holes "eat" stars that happen to be near them. (Comment: update of the first version; 151 pages; accepted for publication in Living Reviews in Relativity.)

    Calculating least risk paths in 3D indoor space

    Over the last couple of years, research on indoor environments has gained fresh impetus; more specifically, applications that support navigation and wayfinding have become a booming industry. Indoor navigation research currently covers the technological aspects of indoor positioning and the modelling of indoor space. Algorithmic development to support navigation has so far been left mostly untouched, as most applications mainly rely on adapting Dijkstra's shortest path algorithm to an indoor network. However, alternative algorithms for outdoor navigation have been proposed that add a more cognitive notion to the calculated paths and as such adhere to natural wayfinding behaviour (e.g. simplest paths, least risk paths). These algorithms are currently restricted to outdoor applications. The need for indoor cognitive algorithms is highlighted by the more challenging navigation and orientation caused by the specific indoor structure (e.g. fragmentation, less visibility, confined areas…). As such, the clarity and simplicity of route instructions are of paramount importance when communicating indoor routes. A shortest or fastest path indoors does not necessarily align with the cognitive mapping of the building. Therefore, the aim of this research is to extend these richer cognitive algorithms to three-dimensional indoor environments. More specifically, in this paper we focus on the application of the least risk path algorithm of Grum (2005) to indoor space. The algorithm as proposed by Grum (2005) is replicated and tested in a complex multi-storey building. The results of several least risk path calculations are compared to the shortest paths in indoor environments in terms of total length, improvement in route description complexity, and number of turns. Several scenarios are tested in this comparison: paths covering a single floor and paths crossing several building wings and/or floors. Adjustments to the algorithm are proposed to align it better with the specific structure of indoor environments (e.g. no turn restrictions, restricted usage of rooms, vertical movement) and common indoor wayfinding strategies. In a later stage, other cognitive algorithms will be implemented and tested in both indoor and combined indoor-outdoor settings, in an effort to improve the overall user experience during navigation in indoor environments.
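    The general idea behind a least risk path — biasing Dijkstra's algorithm so that nodes where a navigator could easily go wrong carry an extra penalty — can be sketched as follows. This is a simplified illustration, not Grum's (2005) exact formulation: the toy network, node names, and risk values are assumptions for the example.

```python
# Sketch of a risk-weighted Dijkstra search: the cost of reaching a node
# is its path length plus a penalty for each risky node traversed
# (e.g. a corridor junction with many branching choices).
import heapq

def least_risk_path(graph, risk, start, goal):
    dist = {start: 0.0}   # best known cost to each node
    prev = {}             # predecessor map for path reconstruction
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, length in graph.get(u, []):
            nd = d + length + risk.get(v, 0.0)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Toy indoor network: junction J is shorter but carries a high risk
# penalty, so the planner prefers the slightly longer route via B.
graph = {"A": [("J", 1.0), ("B", 2.0)], "J": [("G", 1.0)], "B": [("G", 1.5)]}
risk = {"J": 5.0}
print(least_risk_path(graph, risk, "A", "G"))  # ['A', 'B', 'G']
```

    With the risk penalties set to zero the search reduces to a plain shortest path, which is what makes length-versus-complexity comparisons like those described above straightforward to run on the same network.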

    Classification of information about proteins

    The use of advanced high-throughput technology in proteomics results in the production of large volumes of information-rich data. This data requires considerable knowledge management to allow biologists and bioinformaticians to access and understand the information in the context of their experiments. As the volume of data increases, the results from these high-throughput experiments will provide the foundations for advancing proteome biology. In this chapter, we consider the challenges of information integration in proteomics from the perspective of researchers using information technology as an integral part of their discovery process. We first describe the information about proteins that is collected from high-throughput experimentation and how this is managed. We then describe how protein ontologies can be used to classify this information. Finally, we discuss some of the uses of protein classification systems and the biological challenges in proteomics that they help to resolve.
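    The classification mechanism described here — annotating a protein with an ontology term and letting it inherit all broader classifications through the term hierarchy — can be sketched as follows. The term names, hierarchy, and protein accessions are illustrative assumptions, not a real ontology.

```python
# Sketch of ontology-based protein classification: each term lists its
# "is_a" parents, and a protein annotated with a term is also classified
# under every ancestor of that term.
ONTOLOGY = {
    "kinase": ["enzyme"],
    "enzyme": ["protein"],
    "membrane protein": ["protein"],
    "protein": [],
}

def ancestors(term):
    """All terms reachable via is_a links, including the term itself."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(ONTOLOGY.get(t, []))
    return seen

# Hypothetical annotations: protein accession -> most specific term.
annotations = {"P12345": "kinase", "P67890": "membrane protein"}

def classify(query_term):
    """All proteins whose annotation falls under the query term."""
    return sorted(p for p, t in annotations.items() if query_term in ancestors(t))

print(classify("enzyme"))   # ['P12345']
print(classify("protein"))  # ['P12345', 'P67890']
```

    A query against a broad term therefore retrieves proteins annotated only with narrower terms, which is the practical payoff of ontology-based classification over flat keyword tagging.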