Global features of upper-tropospheric zonal wind and thermal fields during anomalous monsoon situations
Global analyses of the mean monthly zonal wind component and temperature at the 200, 150 and 100 mb levels have been made for the region between 60°N and 60°S for the months May through September during two poor monsoon years (1972 and 1979) and a good monsoon year (1975). Prominent and consistent contrasting features of the zonal wind and thermal fields have been identified with reference to monsoon performance over India. The areal spread of easterlies over the tropics and extratropics is significantly greater during a good monsoon year. A shift of the axis of the tropical easterly jet stream to a higher level and generally stronger easterlies also characterize good monsoon activity. The upper troposphere has been found to be considerably cooler during poor monsoon years.
Scoliosis correction in children-anaesthetic challenge: A case report
Kyphoscoliosis correction is a challenging surgery for the surgeon, but it is even more challenging for the anaesthesiologist, who must induce and maintain anaesthesia throughout the procedure and manage postoperative pain relief and ventilation. Here we describe the case of a 3-year-old male child weighing 9 kg who underwent surgical correction of a spinal deformity with instrumentation.
Development of real time PCR for detection and quantitation of Dengue Viruses
Background: Dengue virus (DENV), a mosquito-borne flavivirus, is an important pathogen causing more than 50 million infections every year around the world. Dengue diagnosis depends on serology, which is not useful in the early phase of the disease, and on virus isolation, which is laborious and time consuming. There is a need for a rapid, sensitive and high-throughput method for detection of DENV in the early stages of the disease. Several real-time PCR assays have been described for dengue viruses, but there is scope for improvement. The new-generation TaqMan Minor Groove Binding (MGB) probe approach was used to develop an improved real-time RT-PCR (qRT-PCR) for DENV in this study.
Results: The 3'UTR of thirteen Indian strains of DENV was sequenced and aligned with 41 representative sequences from GenBank. A region conserved in all four serotypes was used to target primers and probes for the qRT-PCR. A single MGB probe and a single primer pair for all four serotypes of DENV were designed. The sensitivity of the two-step qRT-PCR assay was 10 copies of RNA per reaction. The specificity and sensitivity of the assay were 100% when tested with a panel of 39 known positive and negative samples. Viral RNA could be detected and quantitated in infected mouse brain, cell cultures, mosquitoes and clinical samples. Viral RNA could be detected in patients even after seroconversion, up to 10 days post onset of infection. There was no signal with Japanese Encephalitis (JE), West Nile (WN) or Chikungunya (CHK) viruses, or with Leptospira, Plasmodium vivax, Plasmodium falciparum and Rickettsia positive clinical samples.
Conclusion: We have developed a highly sensitive and specific qRT-PCR for detection and quantitation of dengue viruses. The assay will be a useful tool for differential diagnosis of dengue fever in settings where a number of other clinically indistinguishable infectious diseases, such as malaria, Chikungunya, rickettsiosis and leptospirosis, occur. The ability of the assay to detect DENV-2 in inoculated mosquitoes makes it a potential tool for detecting DENV in field-caught mosquitoes.
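For readers unfamiliar with how qRT-PCR quantitation works in practice, the sketch below shows a standard-curve calculation: Ct values from a dilution series of an RNA standard are fit to a line in log10 copy number, and unknowns are read off the inverted fit. The dilution series, Ct values and function names are illustrative assumptions, not data or code from this study.

```python
import numpy as np

# Illustrative standard-curve quantitation for a qRT-PCR assay (hypothetical
# values, not the authors' data): Ct values from a 10-fold dilution series of
# an RNA standard of known copy number are fit to a straight line, and the
# fitted line is then inverted to estimate copies in unknown samples.
std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3, 1e2, 1e1])          # copies/reaction
std_ct     = np.array([15.2, 18.6, 21.9, 25.3, 28.7, 32.1, 35.4])   # measured Ct

slope, intercept = np.polyfit(np.log10(std_copies), std_ct, deg=1)
efficiency = 10 ** (-1.0 / slope) - 1.0   # amplification efficiency (~1.0 means 100%)

def copies_from_ct(ct):
    """Estimate RNA copies per reaction from a sample Ct using the standard curve."""
    return 10 ** ((ct - intercept) / slope)

print(f"slope={slope:.2f}, efficiency={efficiency:.1%}")
print(f"sample Ct 24.0 -> ~{copies_from_ct(24.0):.0f} copies/reaction")
```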
Confound-leakage: confound removal in machine learning leads to leakage
BACKGROUND: Machine learning (ML) approaches are a crucial component of modern data analysis in many fields, including epidemiology and medicine. Nonlinear ML methods often achieve accurate predictions, for instance in personalized medicine, as they are capable of modeling complex relationships between features and the target. Problematically, ML models and their predictions can be biased by confounding information present in the features. To remove this spurious signal, researchers often employ featurewise linear confound regression (CR). While this is considered a standard approach for dealing with confounding, possible pitfalls of using CR in ML pipelines are not fully understood. RESULTS: We provide new evidence that, contrary to general expectations, linear confound regression can increase the risk of confounding when combined with nonlinear ML approaches. Using a simple framework that uses the target as a confound, we show that information leaked via CR can inflate null or moderate effects to near-perfect prediction. By shuffling the features, we provide evidence that this increase is indeed due to confound-leakage and not due to revealing of information. We then demonstrate the danger of confound-leakage in a real-world clinical application, where the accuracy of predicting attention-deficit/hyperactivity disorder from speech-derived features is overestimated when depression is used as a confound. CONCLUSIONS: Mishandling or even amplifying confounding effects when building ML models due to confound-leakage, as shown, can lead to untrustworthy, biased, and unfair predictions. Our exposition of the confound-leakage pitfall, and the guidelines we provide for dealing with it, can help create more robust and trustworthy ML models.
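To make the featurewise confound regression (CR) step concrete, here is a minimal sketch of the pipeline the abstract critiques: the confound is linearly regressed out of each feature and a nonlinear learner is then trained on the residuals. The toy data-generating process and the scikit-learn models used are illustrative assumptions; they do not reproduce the paper's experiments or necessarily the leakage effect itself.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy illustration of featurewise linear confound removal (CR) before a
# nonlinear learner. The data-generating process below is made up purely
# for illustration.
rng = np.random.default_rng(0)
n, p = 500, 10
confound = rng.normal(size=n)                       # e.g. a clinical covariate
X = rng.normal(size=(n, p)) + 0.5 * confound[:, None]
y = (confound + rng.normal(size=n) > 0).astype(int)

# Featurewise CR: regress the confound out of each feature, keep residuals.
X_res = np.empty_like(X)
for j in range(p):
    lr = LinearRegression().fit(confound.reshape(-1, 1), X[:, j])
    X_res[:, j] = X[:, j] - lr.predict(confound.reshape(-1, 1))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("raw features:      ", cross_val_score(clf, X, y, cv=5).mean())
print("confound-regressed:", cross_val_score(clf, X_res, y, cv=5).mean())
```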
Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review
Background: Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis.
Methods: We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios.
Results: For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary-statistic-level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios omitting trials gave superior results.
Conclusions: Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
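As an illustration of the kind of summary-statistic methods the review covers, the snippet below implements commonly published approximations: the SD from the range (range/4), the SD from the interquartile range (IQR/1.35), and the mean from the median and quartiles ((q1 + m + q3)/3). These particular formulas are a hedged example of the approach, not necessarily the exact variants evaluated in the review.

```python
def sd_from_range(minimum, maximum):
    """Rough SD approximation from the range (a common rule of thumb: SD ~ range/4)."""
    return (maximum - minimum) / 4.0

def sd_from_iqr(q1, q3):
    """Approximate the SD from the interquartile range assuming rough normality:
    SD ~ IQR / 1.35."""
    return (q3 - q1) / 1.35

def mean_from_quartiles(q1, median, q3):
    """Approximate the mean from the median and quartiles, (q1 + m + q3)/3,
    one published form of the quartile-based estimator the abstract mentions."""
    return (q1 + median + q3) / 3.0

# Example: a trial reporting median 12 (IQR 8-18), range 3-30
print(mean_from_quartiles(8, 12, 18))   # ~12.7
print(sd_from_iqr(8, 18))               # ~7.4
print(sd_from_range(3, 30))             # ~6.8
```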
OptFlux: an open-source software platform for in silico metabolic engineering
Background: Over the last few years a number of methods have been proposed for the phenotype simulation of microorganisms under different environmental and genetic conditions. These have been used as the basis to support the discovery of successful genetic modifications of microbial metabolism to address industrial goals. However, the use of these methods has been restricted to bioinformaticians and other expert researchers. The main aim of this work is, therefore, to provide a user-friendly computational tool for Metabolic Engineering applications.
Results: OptFlux is an open-source and modular software aimed at being the reference computational application in the field. It is the first tool to incorporate strain optimization tasks, i.e., the identification of Metabolic Engineering targets, using Evolutionary Algorithms/Simulated Annealing metaheuristics or the previously proposed OptKnock algorithm. It also allows the use of stoichiometric metabolic models for (i) phenotype simulation of both wild-type and mutant organisms, using Flux Balance Analysis, Minimization of Metabolic Adjustment or Regulatory on/off Minimization of metabolic flux changes; (ii) Metabolic Flux Analysis, computing the admissible flux space given a set of measured fluxes; and (iii) pathway analysis through the calculation of Elementary Flux Modes. OptFlux also provides several methods for model simplification and other pre-processing operations aimed at reducing the search space for optimization algorithms. The software supports importing/exporting to several flat file formats and is compatible with the SBML standard. OptFlux has a visualization module for analysing the model structure that is compatible with the layout information of Cell Designer, allowing the superimposition of simulation results on the model graph.
Conclusions: The OptFlux software is freely available, together with documentation and other resources, thus bridging the gap between research in strain optimization algorithms and the final users. It is a valuable platform for researchers in the field, providing them with a number of useful tools. Its open-source nature invites contributions by all those interested in making their methods available to the community. Given its plug-in based architecture, it can be extended with new functionalities. Currently, several plug-ins are being developed, including network topology analysis tools and the integration with Boolean network based regulatory models.
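As background for the phenotype-simulation methods OptFlux wraps, Flux Balance Analysis reduces to a linear program over the stoichiometric matrix: maximize an objective flux subject to steady-state mass balance S v = 0 and flux bounds. The toy two-metabolite network below, solved with scipy rather than OptFlux, is purely illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Toy Flux Balance Analysis: maximize a "biomass" flux subject to steady-state
# mass balance S v = 0 and flux bounds. The 2-metabolite, 3-reaction network
# below is invented purely for illustration.
#
# Reactions: R1: -> A (uptake), R2: A -> B, R3 (biomass): B ->
S = np.array([
    [1, -1,  0],   # metabolite A
    [0,  1, -1],   # metabolite B
])
c = np.array([0.0, 0.0, -1.0])            # maximize v3 (linprog minimizes)
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)           # -> [10, 10, 10]
print("biomass flux:", res.x[2])
```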
Reconstruction and analysis of genome-scale metabolic model of a photosynthetic bacterium
Background: Synechocystis sp. PCC6803 is a cyanobacterium considered a candidate photo-biological production platform - an attractive cell factory capable of using CO2 and light as carbon and energy source, respectively. In order to enable efficient use of the metabolic potential of Synechocystis sp. PCC6803, it is important to develop tools for uncovering stoichiometric and regulatory principles in the Synechocystis metabolic network.
Results: We report the most comprehensive metabolic model of Synechocystis sp. PCC6803 available, iSyn669, which includes 882 reactions, associated with 669 genes, and 790 metabolites. The model includes a detailed biomass equation encompassing the elementary building blocks needed for cell growth, as well as a detailed stoichiometric representation of photosynthesis. We demonstrate the applicability of iSyn669 for stoichiometric analysis by simulating three physiologically relevant growth conditions of Synechocystis sp. PCC6803, and through in silico metabolic engineering simulations that allowed identification of a set of gene knock-out candidates towards enhanced succinate production. Gene essentiality and hydrogen production potential have also been assessed. Furthermore, iSyn669 was used as a transcriptomic data integration scaffold, whereby we found metabolic hot-spots around which gene regulation is dominant during light-shifting growth regimes.
Conclusions: iSyn669 provides a platform for facilitating the development of cyanobacteria as microbial cell factories.
Natural computation meta-heuristics for the in silico optimization of microbial strains
Background: One of the greatest challenges in Metabolic Engineering is to develop quantitative models and algorithms to identify a set of genetic manipulations that will result in a microbial strain with a desirable metabolic phenotype, which typically means a high yield/productivity. This challenge is due not only to the inherent complexity of the metabolic and regulatory networks, but also to the lack of appropriate modelling and optimization tools. To this end, Evolutionary Algorithms (EAs) have been proposed for in silico metabolic engineering, for example, to identify sets of gene deletions that maximize a desired physiological objective function. In this approach, each mutant strain is evaluated by simulating its phenotype with the Flux-Balance Analysis (FBA) approach, together with the premise that microorganisms have maximized their growth along natural evolution.
Results: This work reports on improved EAs, as well as novel Simulated Annealing (SA) algorithms, to address the task of in silico metabolic engineering. Both approaches use a variable-size set-based representation, thereby allowing the automatic discovery of the best number of gene deletions necessary for achieving a given productivity goal. The work presents extensive computational experiments, involving four case studies that consider the production of succinic and lactic acid as targets, using S. cerevisiae and E. coli as model organisms. The proposed algorithms are able to reach optimal/near-optimal solutions regarding the production of the desired compounds while presenting low variability among the several runs.
Conclusion: The results show that the proposed SA and EA both perform well in the optimization task. A comparison between them is favourable to the SA in terms of consistency in obtaining optimal solutions and faster convergence. In both cases, the use of variable-size representations allows the automatic discovery of the approximate number of gene deletions, without compromising the optimality of the solutions.
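A minimal sketch of the variable-size, set-based simulated annealing described above: solutions are sets of gene deletions, neighbourhood moves add, remove or swap a gene, and each candidate would be scored by a phenotype simulation (FBA/MOMA/ROOM). The evaluate function below is a placeholder stand-in for that fitness, and all names and parameters are assumptions for illustration.

```python
import math
import random

GENES = [f"g{i}" for i in range(100)]   # hypothetical gene pool

def evaluate(deletions: frozenset) -> float:
    # Placeholder fitness: in practice this would be a phenotype simulation
    # (FBA/MOMA/ROOM) of the deletion mutant, e.g. predicted product flux.
    rng = random.Random(hash(deletions) & 0xFFFF)
    return rng.random() - 0.01 * len(deletions)   # mild parsimony pressure

def neighbour(sol: frozenset) -> frozenset:
    """Variable-size set move: add, remove or swap one gene deletion."""
    s = set(sol)
    move = random.choice(["add", "remove", "swap"])
    if move == "add" or not s:
        s.add(random.choice(GENES))
    elif move == "remove":
        s.discard(random.choice(list(s)))
    else:
        s.discard(random.choice(list(s)))
        s.add(random.choice(GENES))
    return frozenset(s)

def simulated_annealing(iters=5000, t0=1.0, alpha=0.999):
    current = frozenset(random.sample(GENES, 3))
    cur_f = evaluate(current)
    best, best_f = current, cur_f
    t = t0
    for _ in range(iters):
        cand = neighbour(current)
        f = evaluate(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if f > cur_f or random.random() < math.exp((f - cur_f) / t):
            current, cur_f = cand, f
            if f > best_f:
                best, best_f = cand, f
        t *= alpha
    return sorted(best), best_f

print(simulated_annealing())
```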
The PhyloPythiaS Web Server for Taxonomic Assignment of Metagenome Sequences
Metagenome sequencing is becoming common and there is an increasing need for easily accessible tools for data analysis. An essential step is the taxonomic classification of sequence fragments. We describe a web server for the taxonomic assignment of metagenome sequences with PhyloPythiaS. PhyloPythiaS is a fast and accurate sequence composition-based classifier that utilizes the hierarchical relationships between clades. Taxonomic assignments with the web server can be made with a generic model, or with sample-specific models that users can specify and create. Several interactive visualization modes and multiple download formats allow quick and convenient analysis and downstream processing of taxonomic assignments. Here, we demonstrate usage of our web server by taxonomic assignment of metagenome samples from an acidophilic biofilm community of an acid mine and from a microbial community of the cow rumen.
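For orientation, "sequence composition" features are typically k-mer (oligonucleotide) frequency vectors computed from each fragment. The sketch below shows only that feature-extraction step; it is not the PhyloPythiaS model itself, whose feature set and classifier are more involved.

```python
from itertools import product
from collections import Counter

def kmer_composition(seq: str, k: int = 4) -> list:
    """Return the normalized k-mer frequency vector of a DNA sequence.
    This is the kind of 'sequence composition' feature a composition-based
    classifier works from (an illustration, not the PhyloPythiaS feature set)."""
    seq = seq.upper()
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts[km] for km in kmers), 1)
    return [counts[km] / total for km in kmers]

vec = kmer_composition("ATGCGTACGTTAGCATGCGTACGT", k=2)
print(len(vec), sum(vec))   # 16 features summing to ~1
```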
Industrial Systems Biology of Saccharomyces cerevisiae Enables Novel Succinic Acid Cell Factory.
Saccharomyces cerevisiae is the most well characterized eukaryote, the preferred microbial cell factory for the largest industrial biotechnology product (bioethanol), and a robust, commercially compatible scaffold to be exploited for diverse chemical production. Succinic acid is a highly sought-after added-value chemical for which there is no native predisposition for production and accumulation in S. cerevisiae. The genome-scale metabolic network reconstruction of S. cerevisiae enabled in silico gene deletion predictions using an evolutionary programming method to couple biomass and succinate production. Glycine and serine, both essential amino acids required for biomass formation, are formed from both glycolytic and TCA cycle intermediates. Succinate formation results from the isocitrate lyase catalyzed conversion of isocitrate and from the alpha-ketoglutarate dehydrogenase catalyzed conversion of alpha-ketoglutarate. Succinate is subsequently depleted by the succinate dehydrogenase complex. The metabolic engineering strategy identified included deletion of the primary succinate-consuming reaction, Sdh3p, and interruption of glycolysis-derived serine by deletion of 3-phosphoglycerate dehydrogenase, Ser3p/Ser33p. Pursuing these targets, a multi-gene deletion strain was constructed, and directed evolution with selection was used to identify a succinate-producing mutant. Physiological characterization coupled with integrated analysis of transcriptome data in the metabolically engineered strain was used to identify second-round metabolic engineering targets. The resulting strain represents a 30-fold improvement in succinate titer and a 43-fold improvement in succinate yield on biomass, with only a 2.8-fold decrease in the specific growth rate compared to the reference strain. Intuitive genetic targets for either over-expression or interruption of succinate producing or consuming pathways, respectively, do not lead to increased succinate. Rather, we demonstrate how systems biology tools coupled with directed evolution and selection allow non-intuitive, rapid and substantial re-direction of carbon fluxes in S. cerevisiae, and hence show proof of concept that this is a potentially attractive cell factory for over-producing different platform chemicals.
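A sketch of how gene-deletion strategies of this kind can be explored in silico with the open-source cobrapy package (a stand-in, not the tool used by the authors). The SBML file name is a placeholder, and the systematic gene IDs for SDH3, SER3 and SER33 should be checked against the specific yeast genome-scale model in use.

```python
import cobra

# Placeholder path: substitute the yeast genome-scale SBML model actually in use.
model = cobra.io.read_sbml_model("yeast_genome_scale_model.xml")

with model:  # context manager: changes are reverted on exit
    # Systematic names assumed for SDH3, SER3, SER33; verify against the model.
    for gene_id in ["YKL141W", "YER081W", "YIL074C"]:
        model.genes.get_by_id(gene_id).knock_out()
    solution = model.optimize()                  # FBA, maximizing the model's objective
    print("growth rate:", solution.objective_value)
    # Predicted succinate export flux (exchange reaction ID is model-specific).
    print("succinate export:", solution.fluxes.get("EX_succ_e", "not in model"))
```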