339 research outputs found
Worldwide food recall patterns over an eleven month period: A country perspective.
Background: Following the World Health Organization Forum in November 2007, the Beijing Declaration recognized the importance of food safety and the right of all individuals to a safe and adequate diet. The aim of this study is to retrospectively analyse patterns in food alerts and recalls by country, to identify the principal hazard generators and gatekeepers of food safety in the eleven months leading up to the Declaration.

Methods: The food recall data set was collected by the Laboratory of the Government Chemist (LGC, UK) over the period from January to November 2007. Statistics were computed with a focus on reporting patterns across the 117 countries. The complexity of the recorded interrelations was depicted as a network constructed from structural properties contained in the data. The analysed network properties included degrees, weighted degrees, modularity and k-core decomposition. Network analyses of the reports, based on 'country making report' (detector) and 'country reported on' (transgressor), revealed that the network is organized around a dominant core.

Results: Ten countries were reported for sixty per cent of all faulty products marketed, with the top five countries having received between 100 and 281 reports each. Further analysis of the dominant core revealed that, of the top five transgressors, three made no reports (in the order China > Turkey > Iran). The top ten detectors account for three quarters of reports, with three making more than 300 each (Italy: 406; Germany: 340; United Kingdom: 322).

Conclusion: Of the 117 countries studied, the vast majority of food reports are made by 10 countries, with EU countries predominating. The majority of the faulty foodstuffs originate in ten countries, with four major producers making no reports. This pattern is very distant from that proposed by the Beijing Declaration, which urges all countries to take responsibility for the provision of safe and adequate diets for their nationals.
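The detector/transgressor analysis described above can be sketched with a toy directed network. The country names and report counts below are hypothetical, not the LGC 2007 data; weighted out-degree stands in for detector activity, weighted in-degree for being reported on, and a simple peeling loop illustrates k-core decomposition on the undirected projection:

```python
from collections import defaultdict

# Toy detector -> transgressor recall reports (hypothetical counts,
# not the LGC 2007 data set): (detector, transgressor, n_reports).
reports = [
    ("Italy", "China", 5), ("Germany", "China", 4),
    ("United Kingdom", "Turkey", 3), ("Italy", "Turkey", 2),
    ("Germany", "Iran", 1),
]

# Weighted out-degree ~ activity as a detector;
# weighted in-degree ~ frequency of being reported on.
out_w, in_w = defaultdict(int), defaultdict(int)
for det, trans, n in reports:
    out_w[det] += n
    in_w[trans] += n

# k-core decomposition on the undirected projection: repeatedly
# remove nodes of degree < k until every remaining node has degree >= k.
def k_core(edges, k):
    adj = defaultdict(set)
    for u, v, _ in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for node in [n for n, nbrs in adj.items() if len(nbrs) < k]:
            for nb in adj.pop(node):
                adj[nb].discard(node)
            changed = True
    return set(adj)

print(dict(out_w))          # detectors ranked by reports filed
print(dict(in_w))           # transgressors ranked by reports received
print(k_core(reports, 2))   # the dominant core (empty for this tiny toy graph)
```

On the real data set the same degree and core statistics separate the dominant core of heavily connected countries from the periphery.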
Computer-aided diagnosis for (123I)FP-CIT imaging: impact on clinical reporting
BACKGROUND: For (123I)FP-CIT imaging, a number of algorithms have shown high performance in distinguishing normal patient images from those with disease, but none have yet been tested as part of reporting workflows. This study aims to evaluate the impact on reporters' performance of a computer-aided diagnosis (CADx) tool developed from established machine learning technology. Three experienced (123I)FP-CIT reporters (two radiologists and one clinical scientist) were asked to visually score 155 reconstructed clinical and research images on a 5-point diagnostic confidence scale (read 1). Once completed, the process was then repeated (read 2). Immediately after submitting each image score for the second time, the CADx system output was displayed to reporters alongside the image data. With this information available, the reporters submitted a score for the third time (read 3). Comparisons between reads 1 and 2 provided evidence of intra-operator reliability, and differences between reads 2 and 3 showed the impact of the CADx tool. RESULTS: The performance of all reporters demonstrated a degree of variability when analysing images through visual analysis alone. However, inclusion of CADx improved consistency between reporters, for both clinical and research data. The introduction of CADx increased the accuracy of the radiologists when reporting (unfamiliar) research images but had less impact on the clinical scientist and caused no significant change in accuracy for the clinical data. CONCLUSIONS: The outcomes of this study indicate the value of CADx as a diagnostic aid in the clinic and encourage future development for more refined incorporation into clinical practice.
Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?
Background
Semi-quantification methods are well established in the clinic for assisted reporting of (I123) Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified.
This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson’s Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. Machine learning algorithms were based on support vector machine classifiers with three different sets of features:
Voxel intensities
Principal components of image voxel intensities
Striatal binding ratios from the putamen and caudate.
Semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods:
Minimum of age-matched controls
Mean minus 1/1.5/2 standard deviations from age-matched controls
Linear regression of normal patient data against age (minus 1/1.5/2 standard errors)
Selection of the optimum operating point on the receiver operating characteristic curve from normal and abnormal training data
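Two of these normal-limit definitions can be illustrated with a short sketch: the mean-minus-k-standard-deviations limit and the ROC optimum operating point (via Youden's J statistic). The SBR values below are synthetic and illustrative, not drawn from the PPMI or local databases:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic striatal binding ratios (SBRs); values are illustrative only,
# not PPMI or local clinical data.
normal_sbr = rng.normal(2.5, 0.3, 200)    # age-matched controls
abnormal_sbr = rng.normal(1.4, 0.4, 200)  # Parkinsonian training cases

# Normal limit as mean minus k standard deviations of controls.
for k in (1.0, 1.5, 2.0):
    limit = normal_sbr.mean() - k * normal_sbr.std()
    print(f"k={k}: lower normal limit = {limit:.2f}")

# Optimum operating point on the ROC curve: scan candidate cut-offs
# and maximize Youden's J = sensitivity + specificity - 1.
cutoffs = np.linspace(0.5, 3.5, 301)
sens = np.array([(abnormal_sbr < c).mean() for c in cutoffs])
spec = np.array([(normal_sbr >= c).mean() for c in cutoffs])
best = cutoffs[np.argmax(sens + spec - 1)]
print(f"ROC-optimum cut-off = {best:.2f}")
```

The regression-based limits in the list work analogously, replacing the fixed control mean with an age-dependent fit minus a multiple of the standard error.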
Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times.
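The stratified, nested cross-validation protocol above can be sketched with scikit-learn. The synthetic feature matrix and the tuned hyperparameter grid are our assumptions for illustration, not the paper's actual features or settings; a single run of the outer loop is shown (the study repeats it 10 times):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for SBR/voxel-intensity features (not PPMI data).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Nested CV: the inner loop tunes the SVM, the outer loop estimates accuracy,
# so no test fold ever influences hyperparameter selection.
inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
model = GridSearchCV(
    make_pipeline(StandardScaler(), SVC()),
    param_grid={"svc__C": [0.1, 1, 10]},  # illustrative grid
    cv=inner,
)
scores = cross_val_score(model, X, y, cv=outer)  # one repeat of 10-fold CV
print(f"mean accuracy: {scores.mean():.2f}")
```

Repeating the outer loop with different shuffles (e.g. varying `random_state`) yields the "repeated 10 times" estimate and its variance.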
Results
The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy control and Parkinson’s disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92 and between 0.95 and 0.97 for local and PPMI data, respectively.
Conclusions
Classification performance was lower for the local database than the research database for both semi-quantitative and machine learning algorithms. However, for both databases, the machine learning methods generated equal or higher mean accuracies (with lower variance) than any of the semi-quantification approaches. The gain in performance from using machine learning algorithms as compared to semi-quantification was relatively small and may be insufficient, when considered in isolation, to offer significant advantages in the clinical context.
Long-term vitamin E supplementation fails to reduce lipid peroxidation in people at cardiovascular risk: analysis of underlying factors
BACKGROUND: Antioxidant supplementation with vitamin E had no effect on the prevention of cardiovascular diseases (CVD) in three recent large, randomized clinical trials. In order to reassess critically the role of vitamin E in CVD prevention, it is important to establish whether these results are related to a lack of antioxidant action. METHODS: We examined the in vivo antioxidant effect of vitamin E (300 mg/day for about three years) in 144 participants in the Primary Prevention Project (females and males, aged ≥ 50 y, with at least one major CV risk factor, but no history of CVD). Urinary 8-epi-PGF(2α) (isoprostane F(2α)-III or 15-F(2t)-isoP), a validated biomarker of lipid peroxidation, was measured by mass spectrometry. RESULTS: Urinary excretion of 8-epi-PGF(2α) [pg/mg creatinine, median (range)] was 141 (67–498) in treated and 148 (76–561) in untreated subjects (p = 0.10). Taking into account possible confounding variables, multiple regression analysis confirmed that vitamin E had no significant effect on this biomarker. Levels of 8-epi-PGF(2α) were in the normal range for most subjects, except smokers and those with uncontrolled blood pressure or hyperglycemia. CONCLUSIONS: Prolonged vitamin E supplementation did not reduce lipid peroxidation in subjects with major cardiovascular risk factors. The observation that the rate of lipid peroxidation was near normal in a large proportion of subjects may help explain why vitamin E was not effective as an antioxidant in the PPP study and was ineffective for CVD prevention in large-scale trials.
Population dynamics of Bemisia tabaci biotype B in tomato monoculture and tomato intercropped with coriander under organic and conventional cultivation.
The whitefly Bemisia tabaci biotype B (Hemiptera: Aleyrodidae) is a herbivore that is difficult to control owing to the high genotypic plasticity of the species. In tomato it can cause severe damage, mainly through the transmission of several viral diseases. The management of the production system and the intercropping of crops can have a direct effect on populations of this herbivore, without the need for insecticide application. We evaluated the influence of organic and conventional production systems and of the tomato-coriander intercrop on the population dynamics of the whitefly in the experimental field of Embrapa Hortaliças, from May to September 2006. Adult whiteflies and their natural enemies were monitored using yellow sticky traps fixed at the borders and in the interior of the experimental plots, and nymphs were sampled by direct observation of tomato leaves in the field. Although the populations around the different treatments were equivalent, the abundance of adult whiteflies was significantly lower in tomato plots intercropped with coriander, in both the conventional and the organic system. Only the tomato-coriander intercrop in the organic system showed a significant reduction in the number of nymphs per plant relative to the other treatments. Natural enemies were significantly more abundant in the organic system, and a negative correlation was found between the abundance of natural enemies and the number of nymphs per plant. The tomato-coriander association and organic management of the agroecosystem favoured the natural biological control of the whitefly.
Measurement of the Bottom-Strange Meson Mixing Phase in the Full CDF Data Set
We report a measurement of the bottom-strange meson mixing phase \beta_s
using the time evolution of B0_s -> J/\psi (->\mu+\mu-) \phi (-> K+ K-) decays
in which the quark-flavor content of the bottom-strange meson is identified at
production. This measurement uses the full data set of proton-antiproton
collisions at sqrt(s)= 1.96 TeV collected by the Collider Detector experiment
at the Fermilab Tevatron, corresponding to 9.6 fb-1 of integrated luminosity.
We report confidence regions in the two-dimensional space of \beta_s and the
B0_s decay-width difference \Delta\Gamma_s, and measure \beta_s in [-\pi/2,
-1.51] U [-0.06, 0.30] U [1.26, \pi/2] at the 68% confidence level, in
agreement with the standard model expectation. Assuming the standard model
value of \beta_s, we also determine \Delta\Gamma_s = 0.068 +- 0.026 (stat) +-
0.009 (syst) ps-1 and the mean B0_s lifetime, \tau_s = 1.528 +- 0.019 (stat) +-
0.009 (syst) ps, which are consistent and competitive with determinations by
other experiments. Comment: 8 pages, 2 figures; Phys. Rev. Lett. 109, 171802 (2012).
Numerical analysis of different heating systems for warm sheet metal forming
The main goal of this study is to present an analysis
of different heating methods frequently used in laboratory
scale and in the industrial practice to heat blanks at warm
temperatures. In this context, the blank can be heated inside
the forming tools (internal method) or using a heating system
(external method). In order to perform this analysis, a finite
element model is firstly validated with the simulation of the
direct resistance system used in a Gleeble testing machine.
The predicted temperature was compared with the temperature
distribution recorded experimentally and a good agreement
was found. Afterwards, a finite element model is used to
predict the temperature distribution in the blank during the
heating process, when using different heating methods. The
analysis also includes the evaluation of a cooling phase associated with the transport phase for the external heating methods. The results of this analysis show that neglecting the heating and transport phases could lead to inaccuracies in the simulation of the forming phase.

The authors gratefully acknowledge the financial
support of the Portuguese Foundation for Science and Technology (FCT)
under project PTDC/EMS-TEC/1805/2012 and by FEDER funds
through the program COMPETE—Programa Operacional Factores de
Competitividade, under the project CENTRO-07-0224-FEDER-002001
(MT4MOBI). The authors would like to thank Prof. A. Andrade-Campos
for helpful contributions on the development of the finite element code
presented in this work.
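The heating-then-transport temperature history discussed in this abstract can be illustrated with a minimal explicit 1-D finite-difference model. The material properties, surface temperatures, and phase durations below are illustrative assumptions, not values from the validated finite element model in the paper:

```python
import numpy as np

# Explicit 1-D finite-difference model of a blank cross-section:
# a heating phase followed by cooling during transport.
# All properties and temperatures are illustrative assumptions.
L = 0.002            # blank thickness, m
n = 21               # grid nodes
alpha = 1.2e-5       # thermal diffusivity, m^2/s
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha   # satisfies the explicit stability limit (< 0.5)

T = np.full(n, 20.0)       # initial blank temperature, degC

def step(T, boundary):
    """One explicit time step; Dirichlet temperature at both surfaces."""
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T_new[0] = T_new[-1] = boundary
    return T_new

# Heating phase: surfaces held at 300 degC (external heating system).
for _ in range(2000):
    T = step(T, 300.0)
peak = T.mean()

# Transport phase: surfaces driven toward 20 degC ambient (crude model).
for _ in range(500):
    T = step(T, 20.0)
after_transport = T.mean()
print(f"after heating: {peak:.0f} degC, after transport: {after_transport:.0f} degC")
```

Even this crude model shows the point made in the abstract: the blank that leaves the heater is not the blank that enters the forming tools, so omitting the transport phase misstates the initial condition of the forming simulation.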
Prevalence of anatomical variants and coronary anomalies in 543 consecutive patients studied with 64-slice CT coronary angiography
The aim of our study was to assess the prevalence of variants and anomalies of the coronary artery tree in patients who underwent 64-slice computed tomography coronary angiography (CT-CA) for suspected or known coronary artery disease. A total of 543 patients (389 male, mean age 60.5 ± 10.9 years) were reviewed for coronary artery variants and anomalies, with the aid of post-processing tools. The majority of segments were identified according to the American Heart Association scheme. The coronary dominance pattern was: right, 86.6%; left, 9.2%; balanced, 4.2%. The left main coronary artery had a mean length of 112 ± 55 mm. The intermediate branch was present in 21.9% of patients. A variable number of diagonals (one, 25%; two, 49.7%; more than two, 24%; none, 1.3%) and marginals (one, 35.2%; two, 46.2%; more than two, 18%; none, 0.6%) was visualized. Furthermore, CT-CA may visualize smaller branches such as the conus branch artery (98%), the sinus node artery (91.6%), and the septal branches (93%). Single or associated coronary anomalies occurred in 18.4% of the patients, with the following distribution: 43 anomalies of origin and course, 68 intrinsic anomalies (59 myocardial bridging, nine aneurysms), and three fistulas. In conclusion, 64-slice CT-CA provides optimal visualization of the variable and complex anatomy of the coronary arteries because of the improved isotropic spatial resolution and flexible post-processing tools.
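The headline prevalence figures in studies like this are simple proportions, and a confidence interval conveys the precision a cohort of 543 buys. The Wilson interval computation below is our addition for illustration, not part of the paper's analysis; the count of 100 anomalies is inferred from the reported 18.4% of 543:

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# ~100 of 543 patients (18.4%) had single or associated coronary anomalies;
# the count is inferred from the reported percentage, not stated in the paper.
k, n = 100, 543
lo, hi = wilson_ci(k, n)
print(f"prevalence = {k / n:.1%}, 95% CI = ({lo:.1%}, {hi:.1%})")
```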