Homeostatic dysregulation proceeds in parallel in multiple physiological systems
Abstract: An increasing number of aging researchers believe that multi-system physiological dysregulation may be a key biological mechanism of aging, but evidence for this has been sparse. Here, we used biomarker data on nearly 33,000 individuals from four large datasets to test for the presence of multi-system dysregulation. We grouped 37 biomarkers into six a priori groupings representing physiological systems (lipids, immune, oxygen transport, liver function, vitamins, and electrolytes), then calculated dysregulation scores for each system in each individual using statistical distance. Correlations among dysregulation levels across systems were generally weak but significant. Comparison of these results to dysregulation in arbitrary ‘systems’ generated by random grouping of biomarkers showed that a priori knowledge effectively distinguished the true systems in which dysregulation proceeds most independently: correlations among dysregulation levels were higher for the arbitrary systems, indicating that only the a priori systems identified distinct dysregulation processes. Additionally, dysregulation of most systems increased with age and significantly predicted multiple health outcomes, including mortality, frailty, diabetes, heart disease, and number of chronic diseases. The six systems differed in how well their dysregulation scores predicted health outcomes and age. These findings present the first unequivocal demonstration of integrated multi-system physiological dysregulation during aging, demonstrating that physiological dysregulation proceeds neither as a single global process nor as completely independent processes in different systems, but rather as a set of system-specific processes likely linked through weak feedback effects. These processes – probably many more than the six measured here – are implicated in aging.
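The core computation described here, a per-system dysregulation score based on statistical distance, can be sketched as follows. This is an illustration, not the authors' code: the choice of Mahalanobis distance, the DataFrame layout, the column names, and the synthetic demo data are all assumptions.

```python
# Minimal sketch: per-system dysregulation as Mahalanobis distance from the
# population centroid, assuming biomarkers sit in a pandas DataFrame and
# systems are a priori column groups (all names here are hypothetical).
import numpy as np
import pandas as pd

def dysregulation_scores(df: pd.DataFrame, system_columns: dict) -> pd.DataFrame:
    """Return one statistical-distance score per individual and system."""
    scores = {}
    for system, cols in system_columns.items():
        X = df[cols].to_numpy(dtype=float)
        centered = X - X.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
        # sqrt of the quadratic form x' S^-1 x, computed row by row
        scores[system] = np.sqrt(np.einsum("ij,jk,ik->i", centered, cov_inv, centered))
    return pd.DataFrame(scores, index=df.index)

# Synthetic demo: correlate dysregulation levels across two toy "systems".
rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.normal(size=(100, 5)),
                    columns=["hdl", "ldl", "trig", "wbc", "crp"])
systems = {"lipids": ["hdl", "ldl", "trig"], "immune": ["wbc", "crp"]}
print(dysregulation_scores(demo, systems).corr())
```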
Temperature effects on zoeal morphometric traits and intraspecific variability in the hairy crab Cancer setosus across latitude
Phenotypic plasticity is an important but often ignored ability that enables organisms, within species-specific physiological limits, to respond to gradual or sudden extrinsic changes in their environment. In the marine realm, the early ontogeny of decapod crustaceans is among the best-known examples of a temperature-dependent phenotypic response. Here, we present morphometric results for larvae of the hairy crab Cancer setosus whose embryonic development took place at different temperatures at two sites (Antofagasta, 23°45′ S; Puerto Montt, 41°44′ S) along the Chilean coast. Zoea I larvae from Puerto Montt were significantly larger than those from Antofagasta when embryonic development occurred at the same temperature. Larvae from Puerto Montt reared at 12 and 16°C did not differ morphometrically, but larvae from Antofagasta kept at 16 and 20°C did, being larger at the colder temperature. Zoea II larvae reared in Antofagasta at three temperatures (16, 20, and 24°C) showed the same pattern, with larger larvae at colder temperatures. Furthermore, larvae reared at 24°C showed deformations, suggesting that 24°C, which coincides with temperatures found during strong El Niño events, is close to the upper larval thermal tolerance limit. Cancer setosus is exposed to a wide temperature range across its distribution of about 40° of latitude, and phenotypic plasticity in its larval offspring enables the species to respond locally to the inter-decadal warming induced by El Niño. The observed morphological plasticity supports previously reported energetic trade-offs with temperature throughout early ontogeny, indicating that plasticity may be key to a species' success in occupying a wide distribution range and/or thriving under highly variable habitat conditions.
Subfamily specific conservation profiles for proteins based on n-gram patterns
Background: A new algorithm has been developed for generating conservation profiles that reflect the evolutionary history of the subfamily associated with a query sequence. It is based on n-gram patterns (NP{n,m}), which are sets of n residues and m wildcards in windows of size n+m. The generation of conservation profiles is treated as a signal-to-noise problem in which the signal is the count of n-gram patterns in target sequences that are similar to the query sequence and the noise is the count over all target sequences. The signal is differentiated from the noise by applying singular value decomposition to sets of target sequences rank-ordered by similarity with respect to the query. Results: The new algorithm was used to construct 4,248 profiles from 120 randomly selected Pfam-A families. These were compared to profiles generated from multiple alignments using the consensus approach. The two profiles were similar whenever the subfamily associated with the query sequence was well represented in the multiple alignment. It was possible to construct subfamily-specific conservation profiles using the new algorithm for subfamilies with as few as five members. The speed of the new algorithm was comparable to that of the multiple-alignment approach. Conclusion: Subfamily-specific conservation profiles can be generated by the new algorithm without a priori knowledge of family relationships or domain architecture. This is useful when the subfamily contains multiple domains with different levels of representation in protein databases. It may also be applicable when the subfamily sample size is too small for the multiple-alignment approach.
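To make the pattern definition concrete, here is a minimal sketch (not the published implementation) of enumerating NP{n,m} patterns, i.e., n residues plus m wildcard positions in a sliding window of size n+m, and counting them in a query and a target sequence. The example sequences and the simple shared-pattern count are illustrative assumptions; the SVD-based signal/noise separation over similarity-ranked target sets is only indicated in a comment.

```python
# Sketch: enumerate NP{n,m} patterns (n residues and m wildcards in a window
# of n+m positions) and count occurrences; '.' marks a wildcard position.
from itertools import combinations
from collections import Counter

def ngram_patterns(seq: str, n: int, m: int) -> Counter:
    """Count all n-gram patterns with m wildcard positions in seq."""
    w = n + m
    counts = Counter()
    for i in range(len(seq) - w + 1):
        window = seq[i:i + w]
        for wild in combinations(range(w), m):
            pattern = "".join("." if j in wild else window[j] for j in range(w))
            counts[pattern] += 1
    return counts

# Hypothetical toy sequences for illustration only.
query_counts = ngram_patterns("MKVLAAGLLV", n=3, m=1)
target_counts = ngram_patterns("MKVIAAGMLV", n=3, m=1)
# Shared patterns are 'signal' candidates; in the paper, SVD over
# similarity-ranked target sets separates signal from background noise.
shared = set(query_counts) & set(target_counts)
print(len(shared), "shared NP{3,1} patterns")
```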
Automatic Network Fingerprinting through Single-Node Motifs
Complex networks have been characterised by their specific connectivity patterns (network motifs), but their building blocks can also be identified and described by node-motifs, combinations of local network features. One technique for identifying single node-motifs was presented by Costa et al. (L. D. F. Costa, F. A. Rodrigues, C. C. Hilgetag, and M. Kaiser, Europhys. Lett., 87, 1, 2009). Here, we first suggest improvements to the method, including how its parameters can be determined automatically. Such automatic routines make high-throughput studies of many networks feasible. Second, the new routines are validated on different network series. Third, we provide an example of how the method can be used to analyse network time series. In conclusion, we provide a robust method for systematically discovering and classifying characteristic nodes of a network. In contrast to classical motif analysis, our approach can identify individual components (here: nodes) that are specific to a network. Such special nodes, like hubs before them, might be found to play critical roles in real-world networks.
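As a rough illustration of describing nodes by combinations of local features, the sketch below builds a small per-node feature vector with networkx and flags nodes whose feature combination is unusual. This is not the authors' method: the feature set, the z-score outlier rule, the threshold, and the example graph are all assumptions.

```python
# Illustrative sketch: per-node local features, then flag outlier
# ("characteristic") nodes by their z-score distance from the mean profile.
import networkx as nx
import numpy as np

def node_feature_matrix(G: nx.Graph):
    nodes = list(G.nodes())
    degree = dict(G.degree())
    clustering = nx.clustering(G)
    avg_neighbor_deg = nx.average_neighbor_degree(G)
    X = np.array([[degree[v], clustering[v], avg_neighbor_deg[v]] for v in nodes])
    return nodes, X

G = nx.barabasi_albert_graph(200, 2, seed=1)     # assumed demo network
nodes, X = node_feature_matrix(G)
z = (X - X.mean(axis=0)) / X.std(axis=0)
distance = np.linalg.norm(z, axis=1)             # how unusual each node is
special = [v for v, d in zip(nodes, distance) if d > 2.5]
print("candidate characteristic nodes:", special)
```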
Genetic divergence is not the same as phenotypic divergence
Far too often, phenotypic divergence has been misinterpreted as genetic divergence, and genetic divergence has been inferred from phenotypic divergence alone. We challenge this practice and call for a clear differentiation between phenotypic and genotypic variation.
Semi-automated non-target processing in GC × GC–MS metabolomics analysis: applicability for biomedical studies
Due to the complexity of typical metabolomics samples and the many steps required to obtain quantitative data from GC × GC–MS (deconvolution, peak picking, peak merging, and integration), unbiased non-target quantification of GC × GC–MS data still poses a major challenge in metabolomics analysis. The feasibility of using commercially available software for non-target processing of GC × GC–MS data was assessed. For this purpose, a set of mouse liver samples (24 study samples and five quality control (QC) samples prepared from the study samples) was measured with GC × GC–MS and GC–MS to study the development and progression of insulin resistance, a primary characteristic of type 2 diabetes. A total of 170 and 691 peaks were quantified in the GC–MS and GC × GC–MS data, respectively, for all study and QC samples. The quantitative results for the QC samples were compared to assess the quality of semi-automated GC × GC–MS processing relative to targeted GC–MS processing, which involved time-consuming manual correction of all incorrectly integrated metabolites and was considered the gold standard. The relative standard deviations (RSDs) obtained with GC × GC–MS were somewhat higher than with GC–MS, owing to less accurate processing. Still, the biological information in the study samples was preserved and the added value of GC × GC–MS was demonstrated: many additional candidate biomarkers were found with GC × GC–MS compared to GC–MS.
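The QC comparison rests on a simple quantity, the relative standard deviation of each quantified peak across repeated QC injections. A minimal sketch follows; the table layout, metabolite names, and numbers are assumptions for illustration, not data from the study.

```python
# Sketch (assumed data layout): relative standard deviation (RSD, %) of each
# quantified peak across QC samples, the criterion used to compare the
# quality of GCxGC-MS and GC-MS processing.
import pandas as pd

def qc_rsd(peak_table: pd.DataFrame) -> pd.Series:
    """peak_table: rows = QC samples, columns = quantified peaks."""
    return 100.0 * peak_table.std(ddof=1) / peak_table.mean()

# Hypothetical usage with five QC injections and two peaks:
qc = pd.DataFrame({"glucose": [1.00, 1.05, 0.98, 1.02, 0.99],
                   "alanine": [0.50, 0.61, 0.47, 0.58, 0.55]})
print(qc_rsd(qc).round(1))   # RSD per metabolite, in percent
```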
NR-2L: A Two-Level Predictor for Identifying Nuclear Receptor Subfamilies Based on Sequence-Derived Features
Nuclear receptors (NRs) are one of the most abundant classes of transcriptional regulators in animals. They regulate diverse functions, such as homeostasis, reproduction, development, and metabolism, and are therefore very important targets for drug development. Nuclear receptors form a superfamily of phylogenetically related proteins that has been subdivided into different subfamilies on the basis of their domain diversity. In this study, a two-level predictor, called NR-2L, was developed that can identify a query protein as a nuclear receptor or not based on its sequence information alone; if it is, the prediction automatically continues to identify it among the following seven subfamilies: (1) thyroid hormone-like (NR1), (2) HNF4-like (NR2), (3) estrogen-like (NR3), (4) nerve growth factor IB-like (NR4), (5) fushi tarazu-F1-like (NR5), (6) germ cell nuclear factor-like (NR6), and (7) knirps-like (NR0). The identification was made by the fuzzy K-nearest neighbor (FK-NN) classifier based on a pseudo amino acid composition formed by incorporating various physicochemical and statistical features derived from the protein sequences, such as amino acid composition, dipeptide composition, complexity factor, and low-frequency Fourier spectrum components. As a demonstration, it was shown on benchmark datasets derived from NucleaRDB and UniProt with low redundancy that the overall success rates achieved by the jackknife test were about 93% and 89% at the first and second level, respectively. These high success rates indicate that the novel two-level predictor can be a useful vehicle for identifying NRs and their subfamilies. As a user-friendly web server, NR-2L is freely accessible at either http://icpr.jci.edu.cn/bioinfo/NR2L or http://www.jci-bioinfo.cn/NR2L. Each job submitted to NR-2L can contain up to 500 query protein sequences and is finished in less than 2 minutes; the fewer the query proteins, the shorter the processing time will usually be. All program code for NR-2L is available for non-commercial purposes upon request.
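To illustrate only the two-level decision flow, here is a conceptual sketch that uses plain amino acid composition features and scikit-learn's KNeighborsClassifier as a stand-in for the paper's fuzzy K-nearest-neighbor classifier on pseudo amino acid composition. The feature set, labels, and training data are assumptions, not the NR-2L implementation.

```python
# Conceptual sketch of a two-level predictor: level 1 decides NR vs non-NR,
# level 2 assigns an NR subfamily. Plain k-NN and plain amino acid
# composition are stand-ins for the paper's FK-NN and pseudo AAC features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq: str) -> np.ndarray:
    """20-dimensional amino acid composition feature vector."""
    seq = seq.upper()
    return np.array([seq.count(a) / len(seq) for a in AMINO_ACIDS])

def predict_two_level(seq: str,
                      level1: KNeighborsClassifier,
                      level2: KNeighborsClassifier) -> str:
    x = aa_composition(seq).reshape(1, -1)
    if level1.predict(x)[0] != "NR":     # level 1: nuclear receptor or not
        return "non-nuclear receptor"
    return level2.predict(x)[0]          # level 2: subfamily (NR0..NR6)

# Hypothetical usage: level1/level2 would be fitted on labeled benchmark
# sequences, e.g. level1.fit(X_all, y_is_nr); level2.fit(X_nr, y_subfamily).
```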
Non-stationary covariance function modelling in 2D least-squares collocation
Standard least-squares collocation (LSC) assumes 2D stationarity and 3D isotropy, and relies on a covariance function to account for spatial dependence in the observed data. However, the assumption that the spatial dependence is constant throughout the region of interest may sometimes be violated. Assuming a stationary covariance structure can result in over-smoothing of, e.g., the gravity field in mountains and under-smoothing in great plains. We introduce the kernel convolution method from spatial statistics for non-stationary covariance structures, and demonstrate its advantage for dealing with non-stationarity in geodetic data. We then compared stationary and non-stationary covariance functions in 2D LSC on the empirical example of gravity anomaly interpolation near the Darling Fault, Western Australia, where the field is anisotropic and non-stationary. The results with non-stationary covariance functions are better than standard LSC in terms of formal errors and cross-validation against data not used in the interpolation, demonstrating that the use of non-stationary covariance functions can improve upon standard (stationary) LSC.
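As a rough illustration of the ingredients involved, the sketch below performs LSC prediction with a kernel-convolution-type non-stationary covariance whose length scale varies in space. This is not the paper's implementation: the Gaussian base covariance, the particular parameterization (normalization constants differ across references), the synthetic length-scale field, and the demo data are all assumptions.

```python
# Sketch: 2D least-squares collocation with a Gaussian covariance whose
# length scale l(x) varies in space (kernel-convolution-type covariance).
import numpy as np

def nonstat_cov(xa, xb, la, lb, c0=1.0):
    """C(x_i, x_j) for scalar, spatially varying length scales (2D case)."""
    d2 = np.sum((xa[:, None, :] - xb[None, :, :]) ** 2, axis=-1)
    s2 = la[:, None] ** 2 + lb[None, :] ** 2
    return c0 * (2.0 * la[:, None] * lb[None, :] / s2) * np.exp(-d2 / s2)

def lsc_predict(x_obs, z_obs, l_obs, x_new, l_new, noise_var=1e-3):
    """LSC prediction: s_hat = C_pz (C_zz + D)^-1 z."""
    Czz = nonstat_cov(x_obs, x_obs, l_obs, l_obs) + noise_var * np.eye(len(z_obs))
    Cpz = nonstat_cov(x_new, x_obs, l_new, l_obs)
    return Cpz @ np.linalg.solve(Czz, z_obs)

# Hypothetical usage: shorter length scales over "rough" terrain,
# longer ones over smooth plains.
rng = np.random.default_rng(0)
x_obs = rng.random((50, 2))
l_obs = np.where(x_obs[:, 0] < 0.5, 0.05, 0.2)   # assumed length-scale field
z_obs = np.sin(8 * x_obs[:, 0]) + 0.01 * rng.standard_normal(50)
x_new = np.array([[0.25, 0.5], [0.75, 0.5]])
print(lsc_predict(x_obs, z_obs, l_obs, x_new, np.array([0.05, 0.2])))
```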
1/f² Characteristics and Isotropy in the Fourier Power Spectra of Visual Art, Cartoons, Comics, Mangas, and Different Categories of Photographs
Art images and natural scenes have in common that their radially averaged (1D) Fourier spectral power falls according to a power law with increasing spatial frequency (1/f² characteristics), which implies that the power spectra have scale-invariant properties. In the present study, we show that other categories of man-made images, cartoons and graphic novels (comics and mangas), have similar properties. Furthermore, we extend our investigations to 2D power spectra. In order to determine whether the Fourier power spectra of man-made images differed from those of other categories of images (photographs of natural scenes, objects, faces, and plants, and scientific illustrations), we analyzed their 2D power spectra by principal component analysis. Results indicated that the first fifteen principal components allowed a partial separation of the different image categories. The differences between the image categories were studied in more detail by analyzing whether the mean power and the slope of the power gradients from low to high spatial frequencies varied across orientations in the power spectra. Mean power was generally higher at cardinal orientations in both real-world photographs and artworks, with no systematic difference between the two types of images. However, the slope of the power gradients showed a lower degree of variability across spectral orientations (i.e., more isotropy) in art images, cartoons, and graphic novels than in photographs of comparable subject matter. Taken together, these results indicate that art images, cartoons, and graphic novels possess relatively uniform 1/f² characteristics across all orientations. In conclusion, the man-made stimuli studied, which were presumably produced to evoke pleasant and/or enjoyable visual perception in human observers, form a subset of all images and share statistical properties in their Fourier power spectra. Whether these properties are necessary or sufficient to induce aesthetic perception remains to be investigated.
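A minimal sketch of the 1D analysis referred to here, assuming a square grayscale image stored as a NumPy array: compute the radially averaged Fourier power spectrum and fit its log-log slope, which is close to -2 for images with 1/f² characteristics. The image-loading hint and the white-noise demo are illustrative assumptions, not part of the study.

```python
# Sketch: radially averaged Fourier power spectrum and its log-log slope.
import numpy as np

def radial_power_spectrum(img: np.ndarray):
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)
    # average power within integer-radius bins
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(h, w) // 2)          # skip DC, stop near Nyquist
    return freqs, radial[1:min(h, w) // 2]

def spectral_slope(img: np.ndarray) -> float:
    freqs, radial = radial_power_spectrum(img)
    slope, _ = np.polyfit(np.log(freqs), np.log(radial), 1)
    return slope

# Hypothetical usage with a real image:
#   img = np.asarray(PIL.Image.open("artwork.png").convert("L"), dtype=float)
rng = np.random.default_rng(0)
print(spectral_slope(rng.random((256, 256))))     # white noise: slope near 0
```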