248 research outputs found

    Biodegradation kinetics of Linear Alkylbenzene Sulphonates in sea water

    This article reports the primary biodegradation kinetics of linear alkylbenzene sulphonates (LAS) in sea water from the Bay of Cadiz (South West of the Iberian Peninsula). The authors used biodegradation test guideline 835.3160 "Biodegradability in sea water", proposed by the Office of Prevention, Pesticides, and Toxic Substances of the United States Environmental Protection Agency, in its shake-flask variant. High performance liquid chromatography (HPLC) was employed for the analysis of the surfactant material. The surfactant shows primary biodegradation kinetics that follow a logistic model; the kinetic parameters t50 and lag time were calculated by means of a simple quantitative procedure introduced here. Mean values of 6.15 ± 0.45 and 6.67 ± 0.6 days were obtained for t50 and lag time, respectively. These results indicate that although LAS has a high primary biodegradation rate in sea water, it biodegrades more slowly than in similar tests conducted in river water.
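    As a rough illustration of how such logistic parameters can be extracted, the sketch below fits a logistic biodegradation curve to made-up percent-degradation data and derives t50 and the lag time. The parameterization, the example data, and the tangent-based lag estimate are assumptions for illustration, not the paper's procedure.

```python
# Hypothetical sketch: fit a logistic primary-biodegradation curve and derive
# t50 and lag time.  Data and parameterization are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, t50):
    """Percent primary biodegradation at time t (days)."""
    return 100.0 / (1.0 + np.exp(-k * (t - t50)))

# Illustrative measurements: day vs. % LAS degraded (e.g. from HPLC)
days = np.array([0, 2, 4, 5, 6, 7, 8, 10, 14, 21])
degraded = np.array([0, 2, 10, 25, 48, 70, 85, 95, 98, 99])

(k, t50), _ = curve_fit(logistic, days, degraded, p0=[1.0, 6.0])

# Lag time estimated where the tangent at the inflection point crosses 0 %
lag = t50 - 2.0 / k
print(f"t50 = {t50:.2f} d, lag time = {lag:.2f} d")
```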

    A proposed methodology for the assessment of arsenic, nickel, cadmium and lead levels in ambient air

    Air quality assessment, required by the European Union (EU) Air Quality Directive 2008/50/EC, is part of the functions attributed to environmental management authorities. Given the cost and time of the experimental work required to assess air quality with respect to the EU-regulated metals and metalloids, other methods such as modelling or objective estimation arise as competitive alternatives when, in accordance with the Air Quality Directive, the levels of pollutants permit their use at a specific location. This work investigates the possibility of using statistical models based on Partial Least Squares Regression (PLSR) and Artificial Neural Networks (ANNs) to estimate the levels of arsenic (As), cadmium (Cd), nickel (Ni) and lead (Pb) in ambient air, and their application for policy purposes. A methodology comprising the main steps needed to prepare the input database, develop the models and evaluate their performance is proposed and applied to a case study in Santander (Spain). It was observed that, even though these approaches have some difficulty estimating individual sample concentrations, they perform equivalently and can be considered valid for estimating the mean values - those to be compared with the limit/target values - while fulfilling the uncertainty requirements of the Air Quality Directive. Additionally, the influence of input variables related to atmospheric stability on the performance of the studied statistical models was determined. Although these additional inputs had no effect on the As and Cd models, they did yield an improvement for Pb and Ni, especially for the ANN models. This work was supported by the Spanish Ministry of Economy and Competitiveness (MINECO) through Projects CTM2010-16068 and CTM2013-43904R. Germán Santos thanks MINECO for his FPI research fellowship (BES-2011-047908).
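    A minimal sketch of the PLSR branch of such a methodology is shown below, assuming a tabular dataset of daily predictors and measured metal levels; the data are synthetic and the number of predictors and components are illustrative assumptions, not those of the study.

```python
# Minimal PLSR sketch: estimate As, Cd, Ni, Pb levels from daily predictors.
# All data below are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(365, 8))        # e.g. PM10, wind speed, temperature, ...
Y = rng.lognormal(size=(365, 4))     # As, Cd, Ni, Pb in ng/m3 (synthetic)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=4).fit(X_tr, Y_tr)
Y_hat = pls.predict(X_te)

# For Directive purposes the mean values matter more than individual samples
print("estimated means:", Y_hat.mean(axis=0))
print("observed  means:", Y_te.mean(axis=0))
```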

    Rapid viral metagenomics using SMART-9N amplification and nanopore sequencing [version 2; peer review: 2 approved]

    Emerging and re-emerging viruses are a global health concern. Genome sequencing as an approach for monitoring circulating viruses is currently hampered by complex and expensive methods. Untargeted, metagenomic nanopore sequencing can provide genomic information to identify pathogens and to prepare for, or even prevent, outbreaks. SMART (Switching Mechanism at the 5' end of RNA Template) is a popular approach for RNA-Seq, but most current methods rely on oligo-dT priming to target polyadenylated mRNA molecules. We have developed two randomly primed SMART-Seq approaches: a sequencing-agnostic approach, 'SMART-9N', and a version compatible with the rapid adapters available from Oxford Nanopore Technologies, 'Rapid SMART-9N'. The methods were developed using viral isolates and clinical samples, and compared to a gold-standard amplicon-based method. From a Zika virus isolate, the SMART-9N approach recovered 10 kb of the 10.8 kb RNA genome in a single nanopore read. We also obtained full genome coverage at high depth using Rapid SMART-9N, which takes only 10 minutes and costs up to 45% less than other methods. We found the limit of detection of these methods to be 6 focus-forming units (FFU)/mL, with 99.02% and 87.58% genome coverage for SMART-9N and Rapid SMART-9N, respectively. Yellow fever virus plasma samples and SARS-CoV-2 nasopharyngeal samples previously confirmed by RT-qPCR, spanning a broad range of Ct values, were selected for validation. Both methods produced greater genome coverage than the multiplex PCR approach, and we obtained the longest single read of this study (18.5 kb), covering 60% of the virus genome, from a SARS-CoV-2 clinical sample using the Rapid SMART-9N method. This work demonstrates that SMART-9N and Rapid SMART-9N are sensitive, low-input, long-read-compatible alternatives for RNA virus detection and genome sequencing, and that Rapid SMART-9N improves the cost, time, and complexity of laboratory work.
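    For readers wondering how coverage figures such as those quoted above are typically obtained, the sketch below computes breadth and mean depth of genome coverage from a sorted BAM of reads aligned to a viral reference. The file name is a placeholder and this is not the authors' pipeline; it assumes pysam and an indexed alignment.

```python
# Illustrative sketch: breadth and depth of coverage from an aligned BAM.
# "smart9n_vs_zika.sorted.bam" is a hypothetical file name.
import numpy as np
import pysam

bam = pysam.AlignmentFile("smart9n_vs_zika.sorted.bam", "rb")
ref = bam.references[0]
length = bam.lengths[0]

# per-base depth: sum of A, C, G and T counts at every reference position
depth = np.sum(bam.count_coverage(ref, 0, length), axis=0)

breadth = 100.0 * np.count_nonzero(depth) / length
print(f"genome coverage: {breadth:.2f} % at mean depth {depth.mean():.1f}x")
```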

    Exploratory analysis of protein translation regulatory networks using hierarchical random graphs

    Background: Protein translation is a vital cellular process for any living organism. The availability of interaction databases provides an opportunity for researchers to exploit this immense amount of data in silico, for example by studying biological networks. There has been extensive effort using computational methods to decipher transcriptional regulatory networks; however, research on translation regulatory networks has received little attention in the bioinformatics and computational biology community. Results: In this paper, we present an exploratory analysis of yeast protein translation regulatory networks using hierarchical random graphs. We derive a protein translation regulatory network from a protein-protein interaction dataset. Using a hierarchical random graph model, we show that the network exhibits a well-organized hierarchical structure. In addition, we apply this technique to predict missing links in the network. Conclusions: The hierarchical random graph model can be a useful technique for inferring hierarchical structure from network data and for predicting missing links in partly known networks. The results from the reconstructed protein translation regulatory networks have potential implications for better understanding the mechanisms of translational control from a systems perspective.
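    To illustrate the hierarchical-random-graph idea (in the spirit of Clauset, Moore and Newman's model), the sketch below computes, for one fixed dendrogram, the maximum-likelihood connection probability at each internal node and ranks unobserved node pairs by the probability at their lowest common ancestor. The original approach samples dendrograms by MCMC; using a single average-linkage dendrogram and a stand-in example graph are simplifying assumptions.

```python
# HRG-style link prediction with a single fixed dendrogram (simplified sketch).
import networkx as nx
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

G = nx.karate_club_graph()          # stand-in for the translation network
nodes = list(G.nodes())
n = len(nodes)

# dendrogram over nodes built from shortest-path distances
D = np.array([[nx.shortest_path_length(G, u, v) for v in nodes] for u in nodes])
Z = linkage(squareform(D, checks=False), method="average")

# members of every cluster (leaves 0..n-1, internal nodes n..2n-2) and the
# maximum-likelihood connection probability p_r = E_r / (|L_r| * |R_r|)
members = {i: {nodes[i]} for i in range(n)}
p = {}
for k, (a, b, _, _) in enumerate(Z):
    left, right = members[int(a)], members[int(b)]
    e = sum(1 for u in left for v in right if G.has_edge(u, v))
    p[n + k] = e / (len(left) * len(right))
    members[n + k] = left | right

def lca_probability(u, v):
    """p_r at the lowest internal node whose subtree contains both u and v."""
    for k in range(len(Z)):
        if u in members[n + k] and v in members[n + k]:
            return p[n + k]
    return 0.0

# rank unobserved pairs: a high p at the LCA suggests a missing link
candidates = [(lca_probability(u, v), u, v) for u, v in nx.non_edges(G)]
for score, u, v in sorted(candidates, reverse=True)[:5]:
    print(f"predicted link {u}-{v}  (p = {score:.2f})")
```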

    Declining mortality following acute myocardial infarction in the Department of Veterans Affairs Health Care System

    Background: Mortality from acute myocardial infarction (AMI) is declining worldwide. We sought to determine if mortality in the Veterans Health Administration (VHA) has also been declining. Methods: We calculated 30-day mortality rates between 2004 and 2006 using data from the VHA External Peer Review Program (EPRP), which entails detailed abstraction of records of all patients with AMI. To compare trends within VHA with other systems of care, we estimated relative mortality rates between 2000 and 2005 for all males 65 years and older with a primary diagnosis of AMI using administrative data from the VHA Patient Treatment File and the Medicare Provider Analysis and Review (MedPAR) files. Results: Using EPRP data on 11,609 patients, we observed a statistically significant decline in adjusted 30-day mortality following AMI in VHA, from 16.3% in 2004 to 13.9% in 2006, a relative decrease of 15% and a decrease in the odds of dying of 10% per year (p = .011). Similar declines were found for in-hospital and 90-day mortality. Based on administrative data on 27,494 VHA patients aged 65 years and older and 789,400 Medicare patients, 30-day mortality following AMI declined from 16.0% during 2000-2001 to 15.7% during 2004-June 2005 in VHA and from 16.7% to 15.5% in private sector hospitals. After adjusting for patient characteristics and hospital effects, the overall relative odds of death were similar for VHA and Medicare (odds ratio 1.02, 95% C.I. 0.96-1.08). Conclusion: Mortality following AMI within VHA has declined significantly since 2003 at a rate that parallels that in Medicare-funded hospitals.
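    An estimate such as "a decrease in the odds of dying of 10% per year" is the kind of quantity produced by a logistic regression of 30-day mortality on admission year and case-mix covariates. The sketch below shows that calculation on synthetic data; the variables and values are stand-ins, not the EPRP or MedPAR records.

```python
# Hedged sketch: adjusted mortality trend as an odds ratio per admission year.
# Data are synthetic; only the modelling step is illustrated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "year": rng.integers(2004, 2007, n),   # admission year 2004-2006
    "age": rng.normal(70, 10, n),          # example case-mix covariate
    "died_30d": rng.binomial(1, 0.15, n),  # 30-day mortality indicator
})

model = smf.logit("died_30d ~ year + age", data=df).fit(disp=False)
or_per_year = np.exp(model.params["year"])
print(f"odds ratio per additional year: {or_per_year:.3f}")
```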

    Problems with Using the Normal Distribution – and Ways to Improve Quality and Efficiency of Data Analysis

    Background: The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, as x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays, has become the standard way to characterize variation. Methodology/Principal Findings: Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to consider the causes of variation. Multiplicative causes are in general far more important than additive ones and call for a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, ×/ (times-divide), and corresponding notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x* and the multiplicative standard deviation s* in the form x* ×/ s*, which is advantageous and recommended. Conclusions/Significance: The corresponding shift from the symmetric to the asymmetric view will substantially increase the quality and efficiency of data analysis.
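    A small worked example of the multiplicative summary described above: the geometric mean x* and the multiplicative standard deviation s* are computed on the log scale, and the interval from x*/s* to x*·s* plays the role that x̄ ± SD plays for normally distributed data. The sample below is simulated log-normal data for illustration.

```python
# Geometric mean x* and multiplicative standard deviation s* for skewed data.
import numpy as np

x = np.random.default_rng(2).lognormal(mean=1.0, sigma=0.5, size=1000)

log_x = np.log(x)
x_star = np.exp(log_x.mean())        # multiplicative (geometric) mean
s_star = np.exp(log_x.std(ddof=1))   # multiplicative standard deviation

print(f"x* = {x_star:.2f}, s* = {s_star:.2f}")
print(f"~68% interval: {x_star / s_star:.2f} to {x_star * s_star:.2f}")
print(f"~95% interval: {x_star / s_star**2:.2f} to {x_star * s_star**2:.2f}")
```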

    Efficient Physical Embedding of Topologically Complex Information Processing Networks in Brains and Computer Circuits

    Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI) computer circuits have evolved by commercially driven technology development. Here we follow the historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired, they did not strictly minimize wiring costs. Artificial and biological information processing systems both may evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
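    To make the Rent's-rule fit concrete, the sketch below estimates a Rent exponent p from the scaling E ≈ k·N^p between the number of nodes N inside a partition and the number of edges E crossing its boundary. It uses BFS neighbourhoods of a toy modular graph as partitions; the actual partitioning applied to the brain and C. elegans networks is more elaborate, so this is purely an illustrative assumption.

```python
# Illustrative Rent exponent estimate: regress log(boundary edges) on
# log(partition size) over many simple partitions of a toy modular graph.
import networkx as nx
import numpy as np

G = nx.connected_caveman_graph(20, 8)   # toy modular network
rng = np.random.default_rng(3)

sizes, boundary = [], []
for seed in rng.choice(G.number_of_nodes(), size=60, replace=False):
    for radius in (1, 2, 3):
        part = set(nx.ego_graph(G, int(seed), radius=radius).nodes())
        # edges with exactly one endpoint inside the partition
        e_out = sum(1 for u, v in G.edges(part) if (u in part) != (v in part))
        if 1 < len(part) < G.number_of_nodes():
            sizes.append(len(part))
            boundary.append(max(e_out, 1))

p, log_k = np.polyfit(np.log(sizes), np.log(boundary), 1)
print(f"estimated Rent exponent p ≈ {p:.2f}")
```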