80 research outputs found
Applying refinement to the use of mice and rats in rheumatoid arthritis research
Rheumatoid arthritis (RA) is a painful, chronic disorder, and there is currently an unmet need for effective therapies that will benefit a wide range of patients. The research and development process for therapies and treatments currently involves in vivo studies, which have the potential to cause discomfort, pain or distress. This Working Group report focuses on identifying causes of suffering within commonly used mouse and rat 'models' of RA and describes practical refinements to help reduce suffering and improve welfare without compromising the scientific objectives. The report also discusses other relevant topics, including identifying and minimising sources of variation within in vivo RA studies, the potential to provide pain relief including analgesia, welfare assessment, humane endpoints, reporting standards and the potential to replace animals in RA research.
Probing Metagenomics by Rapid Cluster Analysis of Very Large Datasets
BACKGROUND: The scale and diversity of metagenomic sequencing projects challenge both our technical and conceptual approaches to gene and genome annotation. The recent Sorcerer II Global Ocean Sampling (GOS) expedition yielded millions of predicted protein sequences, which significantly altered the landscape of known protein space by more than doubling its size and adding thousands of new families (Yooseph et al., 2007 PLoS Biol 5, e16). Such datasets defy conventional analysis and annotation methods, not only by their sheer size but also by many other features. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we describe an approach for rapid analysis of the sequence diversity and the internal structure of such very large datasets by advanced clustering strategies using the newly modified CD-HIT algorithm. We performed a hierarchical clustering analysis on the 17.4 million Open Reading Frames (ORFs) identified from the GOS study and found over 33 thousand large predicted protein clusters comprising nearly 6 million sequences. Twenty percent of these clusters did not match known protein families by sequence similarity search and might represent novel protein families. Distributions of the large clusters were illustrated by organism composition, functional class, and sample location. CONCLUSION/SIGNIFICANCE: Our clustering took about two orders of magnitude less computational effort than the comparable protein family analysis of the original GOS study. This approach will help to analyze other large metagenomic datasets in the future. A Web server with our clustering results and annotations of predicted protein clusters is available online at http://tools.camera.calit2.net/gos under the CAMERA project.
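The greedy incremental strategy behind CD-HIT-style clustering can be sketched briefly. This is an illustrative simplification, not the CD-HIT implementation: real CD-HIT uses short-word filtering and banded alignment, whereas the `similarity` function below is a crude k-mer containment stand-in, and the 90% threshold is an assumption chosen for the example.

```python
# Minimal sketch of greedy incremental sequence clustering in the
# style of CD-HIT. Illustrative only: the similarity measure and
# the 0.9 threshold are simplifying assumptions, not CD-HIT's.

def kmer_set(seq, k=5):
    """All overlapping k-mers of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=5):
    """Crude k-mer containment as a stand-in for sequence identity."""
    ka, kb = kmer_set(a, k), kmer_set(b, k)
    if not ka or not kb:
        return 0.0
    return len(ka & kb) / min(len(ka), len(kb))

def greedy_cluster(seqs, threshold=0.9):
    # Process longest sequences first, as CD-HIT does: each sequence
    # joins the first cluster whose representative it matches, or
    # founds a new cluster and becomes its representative.
    clusters = []  # list of (representative, members)
    for s in sorted(seqs, key=len, reverse=True):
        for rep, members in clusters:
            if similarity(rep, s) >= threshold:
                members.append(s)
                break
        else:
            clusters.append((s, [s]))
    return clusters

seqs = ["ATGGCTAGCTAGGCTA", "ATGGCTAGCTAGGCTT", "TTTTCCCCGGGGAAAA"]
for rep, members in greedy_cluster(seqs):
    print(rep, len(members))
```

A single greedy pass like this is what makes the approach fast: each sequence is compared only against cluster representatives, not against every other sequence, which is how clustering 17.4 million ORFs becomes tractable.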
Episodic Memory and Appetite Regulation in Humans
Psychological and neurobiological evidence implicates hippocampal-dependent memory processes in the control of hunger and food intake. In humans, these have been revealed in the hyperphagia that is associated with amnesia. However, it remains unclear whether 'memory for recent eating' plays a significant role in neurologically intact humans. In this study we isolated the extent to which memory for a recently consumed meal influences hunger and fullness over a three-hour period. Before lunch, half of our volunteers were shown 300 ml of soup and half were shown 500 ml. Orthogonal to this, half consumed 300 ml and half consumed 500 ml. This process yielded four separate groups (25 volunteers in each). Independent manipulation of the 'actual' and 'perceived' soup portion was achieved using a computer-controlled peristaltic pump, designed to either refill or draw soup from a soup bowl in a covert manner. Immediately after lunch, self-reported hunger was influenced by the actual and not the perceived amount of soup consumed. However, two and three hours after meal termination this pattern was reversed: hunger was predicted by the perceived amount and not the actual amount. Participants who thought they had consumed the larger 500-ml portion reported significantly less hunger. This was also associated with an increase in the 'expected satiation' of the soup 24 hours later. For the first time, this manipulation exposes the independent and important contribution of memory processes to satiety. Opportunities exist to capitalise on this finding to reduce energy intake in humans.
Full Sequence and Comparative Analysis of the Plasmid pAPEC-1 of Avian Pathogenic E. coli χ7122 (O78∶K80∶H9)
Extraintestinal pathogenic E. coli (ExPEC), including avian pathogenic E. coli (APEC), are very diverse. They cause a complex of diseases in humans, animals, and birds. Even though large plasmids are often associated with the virulence of ExPEC, their characterization is still in its infancy. Virulence-associated genes are also present in the sequence of pAPEC-1. The comparison of the pAPEC-1 sequence with the two available plasmid sequences reveals more gene loss and reorganization than previously appreciated. The presence of pAPEC-1-associated genes was assessed in human ExPEC by PCR, and many patterns of association between genes were found. The pathotype typical of pAPEC-1 was present in some human strains, indicating horizontal transfer between strains and the zoonotic risk of APEC strains. ColV plasmids could share common virulence genes that could be acquired by transposition, without sharing genes of plasmid function.
Fast Identification and Removal of Sequence Contamination from Genomic and Metagenomic Datasets
High-throughput sequencing technologies have strongly impacted microbiology, providing a rapid and cost-effective way of generating draft genomes and exploring microbial diversity. However, sequences obtained from impure nucleic acid preparations may contain DNA from sources other than the sample. Such sequence contamination is a serious threat to the quality of the data used for downstream analysis, causing misassembly of sequence contigs and erroneous conclusions. The removal of sequence contaminants is therefore an essential step for all sequencing projects. We developed DeconSeq, a robust framework for the rapid, automated identification and removal of sequence contamination in longer-read datasets (150 bp mean read length). DeconSeq is publicly available as standalone and web-based versions. The results can be exported for subsequent analysis, and the databases used for the web-based version are automatically updated on a regular basis. DeconSeq categorizes possible contamination sequences, eliminates redundant hits with higher similarity to non-contaminant genomes, and provides graphical visualizations of the alignment results and classifications. Using DeconSeq, we conducted an analysis of possible human DNA contamination in 202 previously published microbial and viral metagenomes and found possible contamination in 145 (72%) metagenomes, with as much as 64% contaminating sequences. This new framework allows scientists to automatically detect and efficiently remove unwanted sequence contamination from their datasets while eliminating critical limitations of current methods. DeconSeq's web interface is simple and user-friendly. The standalone version allows offline analysis and integration into existing data processing pipelines. DeconSeq's results reveal whether the sequencing experiment has succeeded, whether the correct sample was sequenced, and whether the sample contains any sequence contamination from DNA preparation or host. In addition, the analysis of 202 metagenomes demonstrated significant contamination of the non-human-associated metagenomes, suggesting that this method is appropriate for screening all metagenomes. DeconSeq is available at http://deconseq.sourceforge.net/
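The classification logic described above, flagging reads that match a contaminant database while retaining reads with stronger hits to non-contaminant genomes, can be sketched as follows. This is a simplified illustration, not DeconSeq's code: DeconSeq aligns reads with BWA-SW, whereas here we assume alignments are precomputed, and the 94% identity / 90% coverage thresholds are assumptions chosen for the example rather than DeconSeq defaults.

```python
# Illustrative sketch of DeconSeq-style contamination filtering.
# Assumes per-read best hits against a contaminant database (e.g.
# human) and a "retain" database (e.g. microbial references) have
# already been computed. Thresholds are example assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Hit:
    identity: float   # percent identity of the alignment
    coverage: float   # fraction of the read covered by the alignment

def is_contaminant(contam_hit: Optional[Hit],
                   retain_hit: Optional[Hit],
                   min_identity: float = 94.0,
                   min_coverage: float = 0.9) -> bool:
    """Flag a read as contamination if it matches the contaminant
    database well enough, unless its hit to a non-contaminant
    (retain) genome is even stronger."""
    if contam_hit is None:
        return False
    passes = (contam_hit.identity >= min_identity
              and contam_hit.coverage >= min_coverage)
    if not passes:
        return False
    # Eliminate redundant hits: keep reads that match a retain
    # genome with higher similarity than the contaminant hit.
    if retain_hit is not None and retain_hit.identity > contam_hit.identity:
        return False
    return True

reads = {
    "r1": (Hit(99.0, 0.95), None),             # strong contaminant hit
    "r2": (Hit(99.0, 0.95), Hit(99.5, 0.95)),  # stronger retain hit: keep
    "r3": (Hit(80.0, 0.50), None),             # weak hit: keep
}
clean = {r for r, (c, k) in reads.items() if not is_contaminant(c, k)}
print(sorted(clean))  # ['r2', 'r3']
```

The retain-database check is what prevents over-aggressive filtering: a microbial read that happens to resemble human sequence survives as long as its match to a genuine microbial reference is better.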
Meta-omics approaches to understand and improve wastewater treatment systems
Biological treatment of wastewaters depends on microbial processes, usually carried out by mixed microbial communities. Environmental and operational factors can affect microorganisms and/or impact microbial community function, and this has repercussions for bioreactor performance. Novel high-throughput molecular methods (metagenomics, metatranscriptomics, metaproteomics, metabolomics) are providing detailed knowledge of the microorganisms governing wastewater treatment systems and of their metabolic capabilities. The genomes of uncultured microbes with key roles in wastewater treatment plants (WWTPs), such as the polyphosphate-accumulating microorganism Candidatus Accumulibacter phosphatis, the nitrite oxidizer Candidatus Nitrospira defluvii or the anammox bacterium Candidatus Kuenenia stuttgartiensis, are now available through metagenomic studies. Metagenomics allows genetic characterization of full-scale WWTPs and provides information on the lifestyles and physiology of key microorganisms for wastewater treatment. Integrating metagenomic data with metatranscriptomic, metaproteomic and metabolomic information provides a better understanding of microbial responses to perturbations or environmental variations. Data integration may allow the creation of predictive behavior models of wastewater ecosystems, which could help in an improved exploitation of microbial processes. This review discusses the impact of meta-omic approaches on the understanding of wastewater treatment processes, and the implications of these methods for the optimization and design of wastewater treatment bioreactors. Research was supported by the Spanish Ministry of Education and Science (Contract Projects CTQ2007-64324 and CONSOLIDER-CSD 2007-00055) and the Regional Government of Castilla y León (Ref. VA038A07). Research of AJMS is supported by the European Research Council (Grant 323009).
In quest of a systematic framework for unifying and defining nanoscience
This article proposes a systematic framework for unifying and defining nanoscience based on historic first principles and step logic that led to a "central paradigm" (i.e., unifying framework) for traditional elemental/small-molecule chemistry. As such, a nanomaterials classification roadmap is proposed, which divides all nanomatter into Category I: discrete, well-defined and Category II: statistical, undefined nanoparticles. We consider only Category I, well-defined nanoparticles that are >90% monodisperse as a function of Critical Nanoscale Design Parameters (CNDPs) defined according to: (a) size, (b) shape, (c) surface chemistry, (d) flexibility, and (e) elemental composition. Classified as either hard (H) (i.e., inorganic-based) or soft (S) (i.e., organic-based) categories, these nanoparticles were found to manifest pervasive atom mimicry features that included: (1) a dominance of zero-dimensional (0D) core-shell nanoarchitectures, (2) the ability to self-assemble or chemically bond as discrete, quantized nanounits, and (3) well-defined nanoscale valencies and stoichiometries reminiscent of atom-based elements. These discrete nanoparticle categories are referred to as hard or soft particle nanoelements. Many examples describing chemical bonding/assembly of these nanoelements have been reported in the literature. We refer to these hard:hard (H-n:H-n), soft:soft (S-n:S-n), or hard:soft (H-n:S-n) nanoelement combinations as nanocompounds. Due to their quantized features, many nanoelement and nanocompound categories are reported to exhibit well-defined nanoperiodic property patterns. These periodic property patterns are dependent on their quantized nanofeatures (CNDPs) and dramatically influence intrinsic physicochemical properties (i.e., melting points, reactivity/self-assembly, sterics, and nanoencapsulation), as well as important functional/performance properties (i.e., magnetic, photonic, electronic, and toxicologic properties).
We propose this perspective as a modest first step toward more clearly defining synthetic nanochemistry as well as providing a systematic framework for unifying nanoscience. With further progress, one should anticipate the evolution of future nanoperiodic table(s) suitable for predicting important risk/benefit boundaries in the field of nanoscience.
All-sky search for gravitational-wave bursts in the second joint LIGO-Virgo run
We present results from a search for gravitational-wave bursts in the data collected by the LIGO and Virgo detectors between July 7, 2009 and October 20, 2010. Data are analyzed when at least two of the three LIGO-Virgo detectors are in coincident operation, with a total observation time of 207 days. The analysis searches for transients of duration < 1 s over the frequency band 64-5000 Hz, without other assumptions on the signal waveform, polarization, direction or occurrence time. All identified events are consistent with the expected accidental background. We set frequentist upper limits on the rate of gravitational-wave bursts by combining this search with the previous LIGO-Virgo search on the data collected between November 2005 and October 2007. The upper limit on the rate of strong gravitational-wave bursts at the Earth is 1.3 events per year at 90% confidence. We also present upper limits on source rate density per year and Mpc^3 for sample populations of standard-candle sources. As in the previous joint run, typical sensitivities of the search in terms of the root-sum-squared strain amplitude for these waveforms lie in the range 5×10^-22 Hz^-1/2 to 1×10^-20 Hz^-1/2. The combination of the two joint runs constitutes the most sensitive all-sky search for generic gravitational-wave bursts and synthesizes the results achieved by the initial generation of interferometric detectors.
Comment: 15 pages, 7 figures; data for plots and archived public version at https://dcc.ligo.org/cgi-bin/DocDB/ShowDocument?docid=70814&version=19; see also the public announcement at http://www.ligo.org/science/Publication-S6BurstAllSky
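The frequentist rate limit quoted in the abstract can be illustrated with a textbook zero-background calculation. This is a simplified sketch, not the published analysis: when zero events survive above background, the classical 90% CL Poisson upper limit on the mean count is -ln(0.10), about 2.303, and dividing by the livetime gives a rate limit. The published 1.3 events/yr figure combines the livetimes of both joint runs and folds in detection efficiency, which this sketch omits.

```python
# Simplified zero-background frequentist rate upper limit.
# Assumption: no events above background, no efficiency correction;
# the published LIGO-Virgo limit (1.3 events/yr) additionally
# combines both joint runs and accounts for detection efficiency.

import math

def rate_upper_limit(livetime_days, cl=0.90):
    """Upper limit (events per year) on a Poisson rate at confidence
    level cl when zero events are observed."""
    mu = -math.log(1.0 - cl)          # excluded Poisson mean, ~2.303 at 90% CL
    return mu / (livetime_days / 365.25)

# Livetime of the second joint run alone (207 days of coincident data):
print(round(rate_upper_limit(207.0), 2))  # 4.06
```

Comparing this single-run value (about 4 events/yr) with the published 1.3 events/yr shows why combining the two joint runs matters: the limit scales inversely with the total accumulated livetime.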
Chemokines in rheumatoid arthritis
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/46938/1/281_2004_Article_BF00832002.pd