Evaluation of a metagenomic detection technique for human enteric bacteria in retail chicken
This study evaluated a metagenomic technique for detection of human enteric bacteria in artificially contaminated chicken samples. Tests were performed on chicken samples inoculated with Salmonella enterica and Aeromonas hydrophila at dilutions of 10^6, 10^5, and 10^4 CFU/ml. We developed a direct metagenomic PCR technique for detection of bacteria from this food metagenome (chicken DNA, inoculated bacterial DNA and any endogenous microbial DNA). Amplification of the respective bacterial 16S rRNA regions was performed. PCR conditions were optimized, and amplification of Salmonella enterica-specific DNA was achieved in all samples inoculated with the different concentrations of bacterial suspension. Aeromonas hydrophila-inoculated tissues failed to reveal specific amplification even after several modifications in gradient PCR. Interestingly, the control (uninoculated) chicken tissues also exhibited a less intense amplification of DNA of similar size to the target, indicating possible endogenous contamination of the retail chicken meat used for our analysis.
Tempering of Low-Temperature Bainite
Electron microscopy, X-ray diffraction, and atom probe tomography have been used to identify the changes which occur during the tempering of a carbide-free bainitic steel transformed at 473 K (200 °C). Partitioning of solute between ferrite and thin films of retained austenite was observed on tempering at 673 K (400 °C) for 30 minutes. After tempering at 673 K (400 °C) and 773 K (500 °C) for 30 minutes, cementite was observed in the form of nanometre-scale precipitates. Proximity histograms showed that the partitioning of solutes other than silicon from the cementite was slight at 673 K (400 °C) and more obvious at 773 K (500 °C). In both cases, the nanometre-scale carbides are greatly depleted in silicon. The authors are grateful to the Engineering and Physical Sciences Research Council, TATA Steel Europe and The Worshipful Company of Ironmongers for supporting this research. This work was also supported through a user project at ORNL's Center for Nanophase Materials Sciences (CNMS), which is sponsored by the Scientific User Facilities Division, Office of Basic Energy Sciences, U.S. Department of Energy. This manuscript has been authored by UT-Battelle, LLC, under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy.
Does the revised cardiac risk index predict cardiac complications following elective lung resection?
Background:
The Revised Cardiac Risk Index (RCRI) score and the Thoracic Revised Cardiac Risk Index (ThRCRI) score were developed to predict the risk of postoperative major cardiac complications in the general surgical population and in thoracic surgery, respectively. This study aims to determine the accuracy of these scores in predicting the risk of developing cardiac complications, including atrial arrhythmias, after lung resection surgery in adults.
Methods:
We studied 703 patients undergoing lung resection surgery in a tertiary thoracic surgery centre. Observed outcome measures of postoperative cardiac morbidity and mortality were compared against those predicted by each risk score.
Results:
Postoperative major cardiac complications and supraventricular arrhythmias occurred in 4.8% of patients. Both index scores had poor discriminative ability for predicting postoperative cardiac complications, with an area under the receiver operating characteristic (ROC) curve of 0.59 (95% CI 0.51-0.67) for the RCRI score and 0.57 (95% CI 0.49-0.66) for the ThRCRI score.
Conclusions:
In our cohort, RCRI and ThRCRI scores failed to accurately predict the risk of cardiac complications in patients undergoing elective resection of lung cancer. The British Thoracic Society (BTS) recommendation to seek a cardiology referral for all asymptomatic pre-operative lung resection patients with >3 RCRI risk factors is thus unlikely to be of clinical benefit.
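As a point of reference for the discrimination statistics quoted above, the sketch below shows how an AUC with a bootstrap 95% confidence interval can be computed for an integer risk score against a binary complication outcome. It is not the study's code: the cohort, score distribution, event rate, and use of scikit-learn are assumptions made purely for illustration.

```python
# Minimal sketch (not the study's code): discriminative ability of an integer
# risk score (e.g., RCRI-style 0-3) against a binary complication outcome.
# The data below are synthetic, purely for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical cohort of 703 patients: risk scores and observed complications.
scores = rng.integers(0, 4, size=703)               # risk score per patient (0-3)
outcome = rng.binomial(1, 0.03 + 0.01 * scores)     # ~5% event rate, weakly linked to score

auc = roc_auc_score(outcome, scores)

# Bootstrap 95% confidence interval for the AUC.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(scores), len(scores))
    if outcome[idx].min() == outcome[idx].max():     # skip resamples with a single class
        continue
    boot.append(roc_auc_score(outcome[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"AUC = {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```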
Different Types of Corrective Exercises for the Correction of Hyper Lumbar Lordosis in Females - A Narrative Review
Background: Lumbar curvature carries the weight of the upper body and transfers it directly to the pelvis, so it is of great significance. The structures in the lumbar region are among the factors that affect lumbar-pelvic balance, as well as the behaviour of lumbar lordosis and pelvic tilt. Weakness of the abdominal, dorsal, and lumbar muscles has been considered the most common factor increasing lumbar curvature. Excessive lordotic curvature is also called hyperlordosis, hollow back, saddle back, or swayback. Common causes of excessive lordosis include tight low-back muscles, excessive visceral fat, and pregnancy. Objectives: This review aimed at finding and analysing different forms of corrective exercises to correct hyper lumbar lordosis in females. Methods: An online search was performed for English-language articles using the keywords 'corrective exercises' and 'lumbar lordosis'. The scientific literature on physiotherapy management of lumbar lordosis published from 1997 to 2021 was searched, and the reference lists of all retrieved articles were screened. Through the online database search, 100 articles were reviewed and 19 were included in this study based on predetermined inclusion criteria. Inclusion criteria: based on gender (studies with only female participants). Participants: studies included individuals with hyper lumbar lordosis, with or without low back pain. Conclusion: Sixteen articles showed that corrective exercises play a major role in reducing the lumbar lordotic curve and functional disability. Corrective exercises (abdominal strengthening, gluteal strengthening, hip flexor stretching, and back stretching exercises) can maintain core stability of the spinal extensors and flexors; through this, the spinal curvature can be corrected indirectly, posture is corrected, spasm is released, pain subsides, and, finally, quality of life is improved.
An SO(10) Grand Unified Theory of Flavor
We present a supersymmetric SO(10) grand unified theory (GUT) of flavor based on a family symmetry. It makes use of our recent proposal to use SO(10) with the type II seesaw mechanism for neutrino masses, combined with a simple ansatz that the dominant Yukawa matrix (the {\bf 10}-Higgs coupling to matter) has rank one. In this paper, we show how the rank-one model can arise, under some plausible assumptions, as an effective field theory from vectorlike {\bf 16}-dimensional matter fields with masses above the GUT scale. In order to obtain the desired fermion flavor texture, we use flavon multiplets which acquire vevs in the ground state of the theory. By supplementing the theory with an additional discrete symmetry, we find that the flavon vacuum field alignments take a discrete set of values provided some of the higher-dimensional couplings are small. Choosing a particular set of these vacuum alignments appears to lead to a unified understanding of the observed quark-lepton flavor: (i) a lepton mixing matrix that is dominantly tri-bi-maximal with small corrections related to quark mixings; (ii) quark-lepton mass relations at the GUT scale; and (iii) a solar-to-atmospheric neutrino mass ratio in agreement with observations. The model predicts a value of the neutrino mixing parameter $\theta_{13}$ that should be observable in planned long-baseline experiments. Comment: Final version of the paper as it will appear in JHEP
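For reference, "tri-bi-maximal" mixing refers to the following standard leading-order form of the lepton mixing matrix (sign and phase conventions vary between papers); it corresponds to $\sin^2\theta_{12} = 1/3$, $\sin^2\theta_{23} = 1/2$ and $\theta_{13} = 0$, so an observable $\theta_{13}$ must come from the small corrections mentioned above.

```latex
U_{\mathrm{TBM}} =
\begin{pmatrix}
 \sqrt{\tfrac{2}{3}} & \tfrac{1}{\sqrt{3}} & 0 \\
 -\tfrac{1}{\sqrt{6}} & \tfrac{1}{\sqrt{3}} & -\tfrac{1}{\sqrt{2}} \\
 -\tfrac{1}{\sqrt{6}} & \tfrac{1}{\sqrt{3}} & \tfrac{1}{\sqrt{2}}
\end{pmatrix}
```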
Accidental stability of dark matter
We propose that dark matter is stable as a consequence of an accidental Z2 symmetry that results from a flavour-symmetry group which is the double-cover group of the symmetry group of one of the regular geometric solids. Although model-dependent, the phenomenology resembles that of a generic Higgs-portal dark matter scheme. Comment: 12 pages, final version, published in JHEP
A Dynamic Model of Interactions of Ca^(2+), Calmodulin, and Catalytic Subunits of Ca^(2+)/Calmodulin-Dependent Protein Kinase II
During the acquisition of memories, influx of Ca^(2+) into the postsynaptic spine through the pores of activated N-methyl-D-aspartate-type glutamate receptors triggers processes that change the strength of excitatory synapses. The pattern of Ca^(2+) influx during the first few seconds of activity is interpreted within the Ca^(2+)-dependent signaling network such that synaptic strength is eventually either potentiated or depressed. Many of the critical signaling enzymes that control synaptic plasticity, including Ca^(2+)/calmodulin-dependent protein kinase II (CaMKII), are regulated by calmodulin, a small protein that can bind up to 4 Ca^(2+) ions. As a first step toward clarifying how the Ca^(2+)-signaling network decides between potentiation and depression, we have created a kinetic model of the interactions of Ca^(2+), calmodulin, and CaMKII that represents our best understanding of the dynamics of these interactions under conditions that resemble those in a postsynaptic spine. We constrained the parameters of the model from data in the literature, or from our own measurements, and then predicted time courses of activation and autophosphorylation of CaMKII under a variety of conditions. Simulations showed that species of calmodulin with fewer than four bound Ca^(2+) play a significant role in activation of CaMKII in the physiological regime, supporting the notion that processing of Ca^(2+) signals in a spine involves competition among target enzymes for binding to unsaturated species of CaM in an environment in which the concentration of Ca^(2+) is fluctuating rapidly. Indeed, we showed that the dependence of activation on the frequency of Ca^(2+) transients arises from the kinetics of interaction of fluctuating Ca^(2+) with calmodulin/CaMKII complexes. We used parameter sensitivity analysis to identify which parameters will be most beneficial to measure more carefully to improve the accuracy of predictions. This model provides a quantitative base from which to build more complex dynamic models of postsynaptic signal transduction during learning.
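To illustrate the kind of mass-action kinetics such a model is assembled from, the sketch below integrates a highly simplified sequential Ca^(2+)-calmodulin binding ladder with SciPy. The rate constants, the Ca^(2+) waveform, and the lumping of the four binding sites into identical steps are invented simplifications; the published model resolves the individual N- and C-lobe sites and the coupling to CaMKII, which is not reproduced here.

```python
# Illustrative sketch only: a sequential Ca2+-calmodulin binding ladder integrated with SciPy.
# Rate constants and the Ca2+ transient are invented placeholders, not the paper's fitted values.
import numpy as np
from scipy.integrate import solve_ivp

k_on = 5e6      # 1/(M*s), assumed identical for all four binding steps
k_off = 50.0    # 1/s, assumed identical for all four steps

def ca_transient(t):
    """Resting Ca2+ with a brief square pulse, in molar (made-up waveform)."""
    return 1e-6 if 0.1 <= t <= 0.3 else 1e-7

def rhs(t, y):
    # y[i] = concentration of calmodulin with i bound Ca2+ ions, i = 0..4
    ca = ca_transient(t)
    dy = np.zeros(5)
    for i in range(4):
        flux = k_on * ca * y[i] - k_off * y[i + 1]   # net forward flux of step i -> i+1
        dy[i] -= flux
        dy[i + 1] += flux
    return dy

y0 = np.array([10e-6, 0, 0, 0, 0])                   # 10 uM total calmodulin, all Ca2+-free
sol = solve_ivp(rhs, (0.0, 1.0), y0, max_step=1e-3)

print("fraction fully loaded (CaM.Ca4) at t = 1 s:", sol.y[4, -1] / y0.sum())
```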
Improved Weighted Random Forest for Classification Problems
Several studies have shown that combining machine learning models in an appropriate way will introduce improvements over the individual predictions made by the base models. The key to a well-performing ensemble model is the diversity of the base models. Among the most common solutions for introducing diversity into decision trees are bagging and random forests. Bagging enhances diversity by sampling with replacement to generate many training data sets, while random forest additionally selects a random subset of the features. This has made the random forest a winning candidate for many machine learning applications. However, assuming equal weights for all base decision trees does not seem reasonable, as the randomization of sampling and input-feature selection may lead to different levels of decision-making ability across the base decision trees. Therefore, we propose several algorithms that intend to modify the weighting strategy of the regular random forest and consequently make better predictions. The designed weighting frameworks include optimal weighted random forest based on accuracy, optimal weighted random forest based on the area under the curve (AUC), performance-based weighted random forest, and several stacking-based weighted random forest models. The numerical results show that the proposed models are able to introduce significant improvements compared to the regular random forest.
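To make the idea concrete, the sketch below implements the simplest flavour of the approach described above: weighting each tree of a fitted random forest by its accuracy on a held-out split and combining the trees with a weighted soft vote. The dataset, the validation split, and the exact weighting rule are illustrative assumptions, not the authors' algorithms.

```python
# Sketch of accuracy-weighted tree voting (illustrative, not the paper's implementation).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Weight each tree by its accuracy on a held-out validation split.
weights = np.array([accuracy_score(y_val, tree.predict(X_val)) for tree in rf.estimators_])
weights /= weights.sum()

# Weighted soft vote: combine per-tree class-probability estimates using the weights.
proba = sum(w * tree.predict_proba(X_test) for w, tree in zip(weights, rf.estimators_))
weighted_pred = proba.argmax(axis=1)

print("plain RF accuracy:     ", accuracy_score(y_test, rf.predict(X_test)))
print("weighted-vote accuracy:", accuracy_score(y_test, weighted_pred))
```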