Learning image quality assessment by reinforcing task amenable data selection
In this paper, we consider a type of image quality assessment as a task-specific measurement, which can be used to select images that are more amenable to a given target task, such as image classification or segmentation. We propose to simultaneously train two neural networks, one for image selection and one for the target task, using reinforcement learning. A controller network learns an image selection policy by maximising an accumulated reward based on the target task performance on the controller-selected validation set, whilst the target task predictor is optimised using the training set. The trained controller is therefore able to reject those images that lead to poor accuracy in the target task. In this work, we show that the controller-predicted image quality can be significantly different from the task-specific image quality labels that are manually defined by humans. Furthermore, we demonstrate that it is possible to learn effective image quality assessment without using a "clean" validation set, thereby avoiding the requirement for human labelling of images with respect to their amenability for the task. Using , labelled and segmented, clinical ultrasound images from patients, experimental results on holdout data show that the proposed image quality assessment achieved a mean classification accuracy of and a mean segmentation Dice of , by discarding and of the acquired images, respectively. The significantly improved performance was observed for both tested tasks, compared with the respective and from networks without considering task amenability. This enables image quality feedback during real-time ultrasound acquisition, among many other medical imaging applications.
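As a rough illustration of the alternating scheme described in this abstract, the sketch below pairs a controller (the image selection policy) with a task predictor and updates the controller with a REINFORCE-style policy gradient. It assumes a PyTorch-style setup; the architectures, the validation weighting used as the reward, and all names are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class Controller(nn.Module):
    """Scores each image with a selection probability (task amenability)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, x):
        return torch.sigmoid(self.net(x)).squeeze(-1)   # selection probabilities in [0, 1]

class TaskPredictor(nn.Module):
    """Stand-in for the target-task network (here, a linear classifier)."""
    def __init__(self, feat_dim=128, n_classes=2):
        super().__init__()
        self.net = nn.Linear(feat_dim, n_classes)
    def forward(self, x):
        return self.net(x)

def train_step(controller, predictor, c_opt, p_opt, train_x, train_y, val_x, val_y):
    # 1) Sample which training images to keep, following the current selection policy
    with torch.no_grad():
        keep = torch.bernoulli(controller(train_x))      # 0/1 decision per image

    # 2) Update the task predictor on the selected training images only
    losses = nn.functional.cross_entropy(predictor(train_x), train_y, reduction="none")
    task_loss = (losses * keep).sum() / keep.sum().clamp(min=1.0)
    p_opt.zero_grad(); task_loss.backward(); p_opt.step()

    # 3) Reward: task performance on the validation set, weighted by the controller's
    #    quality scores (a simplification of the controller-selected validation set)
    with torch.no_grad():
        correct = (predictor(val_x).argmax(dim=-1) == val_y).float()
        w = controller(val_x)
        reward = (correct * w).sum() / w.sum().clamp(min=1e-8)

    # 4) REINFORCE-style policy-gradient update of the controller
    probs = controller(train_x).clamp(1e-6, 1 - 1e-6)
    log_prob = (keep * probs.log() + (1 - keep) * (1 - probs).log()).mean()
    c_loss = -reward * log_prob
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()
    return reward.item()

# Usage with random stand-in features (placeholders for real image features/labels):
# controller, predictor = Controller(), TaskPredictor()
# c_opt = torch.optim.Adam(controller.parameters(), lr=1e-4)
# p_opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
# train_step(controller, predictor, c_opt, p_opt,
#            torch.randn(32, 128), torch.randint(0, 2, (32,)),
#            torch.randn(32, 128), torch.randint(0, 2, (32,)))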
Adaptable image quality assessment using meta-reinforcement learning of task amenability
The performance of many medical image analysis tasks is strongly associated with image data quality. When developing modern deep learning algorithms, rather than relying on subjective (human-based) image quality assessment (IQA), task amenability potentially provides an objective measure of task-specific image quality. To predict task amenability, an IQA agent is trained using reinforcement learning (RL) with a simultaneously optimised task predictor, such as a classification or segmentation neural network. In this work, we develop transfer learning or adaptation strategies to increase the adaptability of both the IQA agent and the task predictor so that they are less dependent on high-quality, expert-labelled training data. The proposed transfer learning strategy re-formulates the original RL problem for task amenability in a meta-reinforcement learning (meta-RL) framework. The resulting algorithm facilitates efficient adaptation of the agent to different definitions of image quality, each with its own Markov decision process environment including different images, labels and an adaptable task predictor. Our work demonstrates that IQA agents pre-trained on non-expert task labels can be adapted to predict task amenability as defined by expert task labels, using only a small set of expert labels. Using 6644 clinical ultrasound images from 249 prostate cancer patients, our results for image classification and segmentation tasks show that the proposed IQA method can be adapted using data with as few as 19.7% and 29.6% expert-reviewed consensus labels, respectively, and still achieve comparable IQA and task performance, which would otherwise require a training dataset with 100% expert labels.
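To illustrate the adaptation idea in this abstract, the sketch below pre-trains the IQA controller across several "environments", each bundling images, one definition of task labels and its own task predictor, and then fine-tunes on a small expert-labelled set. It reuses Controller, TaskPredictor and train_step from the earlier sketch and substitutes a simple first-order (Reptile-style) meta-update for the paper's meta-RL formulation; all names and hyperparameters are assumptions for illustration only.

import copy
import torch

def meta_train(controller, environments, meta_lr=0.1, inner_steps=20):
    """Pre-train the IQA controller across several label-definition environments."""
    for env in environments:                       # env: dict with data and its own predictor
        inner = copy.deepcopy(controller)
        c_opt = torch.optim.Adam(inner.parameters(), lr=1e-3)
        p_opt = torch.optim.Adam(env["predictor"].parameters(), lr=1e-3)
        for _ in range(inner_steps):
            train_step(inner, env["predictor"], c_opt, p_opt,
                       env["train_x"], env["train_y"], env["val_x"], env["val_y"])
        # Move the meta-parameters a fraction of the way toward the adapted weights
        with torch.no_grad():
            for p_meta, p_in in zip(controller.parameters(), inner.parameters()):
                p_meta += meta_lr * (p_in - p_meta)

def adapt(controller, expert_env, steps=50):
    """Fine-tune the pre-trained controller on a small expert-labelled subset."""
    c_opt = torch.optim.Adam(controller.parameters(), lr=1e-4)
    p_opt = torch.optim.Adam(expert_env["predictor"].parameters(), lr=1e-3)
    for _ in range(steps):
        train_step(controller, expert_env["predictor"], c_opt, p_opt,
                   expert_env["train_x"], expert_env["train_y"],
                   expert_env["val_x"], expert_env["val_y"])
    return controller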
Measurement of the Bottom-Strange Meson Mixing Phase in the Full CDF Data Set
We report a measurement of the bottom-strange meson mixing phase β_s using the time evolution of B0_s → J/ψ(→ μ+μ-) φ(→ K+K-) decays in which the quark-flavor content of the bottom-strange meson is identified at production. This measurement uses the full data set of proton-antiproton collisions at √s = 1.96 TeV collected by the Collider Detector experiment at the Fermilab Tevatron, corresponding to 9.6 fb-1 of integrated luminosity. We report confidence regions in the two-dimensional space of β_s and the B0_s decay-width difference ΔΓ_s, and measure β_s in [-π/2, -1.51] ∪ [-0.06, 0.30] ∪ [1.26, π/2] at the 68% confidence level, in agreement with the standard model expectation. Assuming the standard model value of β_s, we also determine ΔΓ_s = 0.068 ± 0.026 (stat) ± 0.009 (syst) ps-1 and the mean B0_s lifetime, τ_s = 1.528 ± 0.019 (stat) ± 0.009 (syst) ps, which are consistent and competitive with determinations by other experiments. Comment: 8 pages, 2 figures; Phys. Rev. Lett. 109, 171802 (2012)
Multi-messenger observations of a binary neutron star merger
On 2017 August 17 a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of ~1.7 s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg2 at a luminosity distance of 40 (+8/-8) Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 M⊙. An extensive observing campaign was launched across the electromagnetic spectrum leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification of AT 2017gfo) in NGC 4993 (at ~40 Mpc) less than 11 hours after the merger by the One-Meter, Two Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ~10 days. Following early non-detections, X-ray and radio emission were discovered at the transient’s position ~9 and ~16 days, respectively, after the merger. Both the X-ray and radio emission likely arise from a physical process that is distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma-rays and no neutrino candidates consistent with the source were found in follow-up searches. These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993, followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.
Vitalism in Early Modern Medical and Philosophical Thought
Vitalism is a notoriously deceptive term. It is very often defined as the view, in biology, in early modern medicine and, differently, in early modern philosophy, that living beings differ from the rest of the physical universe by possessing an additional ‘life-force’, ‘vital principle’, ‘entelechy’, enormon or élan vital. Such definitions most often have an explicitly pejorative dimension: vitalism is a primitive or archaic view that has somehow survived the emergence of modern science (the latter being defined in many different ways, from demystified Cartesian reductionism to experimental medicine, biochemistry or genetics: Cimino and Duchesneau eds. 1997, Normandin and Wolfe eds. 2013). Such dismissive definitions of vitalism are meant to dispense with argument or analysis.
Curiously, the term has gained some popularity in English-language scholarship on early modern philosophy in the past few decades, where it is used without any pejorative dimension to refer to a kind of ‘active matter’ view, in which matter is not reducible to the (mechanistic) properties of size, shape and motion, possessing instead some internal dynamism or activity (see e.g. James 1999, Boyle 2018, Borcherding forthcoming). The latter meaning is close to what the Cambridge Platonist Ralph Cudworth termed ‘hylozoism’, namely the attribution of life, agency or mind to matter, and he implicitly targeted several figures I shall mention here, notably Margaret Cavendish and Francis Glisson, for holding this view. However, one point I shall make in this entry is that when vitalism first appears by name, and as a self-designation, in the Montpellier School (associated with the Faculty of Medicine at the University of Montpellier, in the second half of the eighteenth century; thus vitalisme appears first, followed shortly thereafter by Vitalismus in German, with ‘vitalism’ appearing in English publications only in the early nineteenth century: Toepfer 2011), it is quite different both from the more ‘supernatural’ view described above – chiefly espoused by its rather obsessive opponents – and from the more neutral, but also de-biologized, philosophical view (that of e.g. Cavendish or Conway, who are, broadly speaking, naturalists). Rather than appealing to a metaphysics of vital force, or of self-organizing matter, this version of vitalism, which I shall refer to as ‘medical vitalism’, seems to be more of a ‘systemic’ theory: an attempt to grasp and describe top-level (‘organizational’, ‘organismic’, ‘holistic’) features of living systems (Wolfe 2017, 2019).
In this entry I seek to introduce some periodization in our thinking about early modern (and Enlightenment) vitalism, emphasizing the difference between the seventeenth-century context and that of the following generations – culminating in the ideas of the Montpellier School. This periodization should also function as a kind of taxonomy, or at least a distinction between some basic types of vitalism. As I discuss in closing, these distinctions can cut across the texts and figures we are dealing with in different ways: metaphysical vs. non-metaphysical vitalism, philosophical vs. medical vitalism, medical vs. ‘embryological’ vitalism, and so on. A difference I can only mention but not explore in detail is that the more medically grounded, ‘organismic’ vitalism is significantly post-Cartesian, while the more biological/embryological vitalism is, inasmuch as it is a dynamic, self-organizing matter theory, an extension of Renaissance ideas (chymiatry, Galenism and, in general, theories of medical spirits).
I examine successively vitalism’s Renaissance prehistory, its proliferation as ‘vital matter theory’ in seventeenth-century England (in authors such as Cavendish, Conway and Glisson, with brief considerations on Harvey and van Helmont), and its mature expression in eighteenth-century Montpellier (notably with Bordeu and Ménuret de Chambaud).
Measurement of the forward-backward asymmetry in the B→K(*)μ+μ- decay and first observation of the Bs0→φμ+μ- decay
We reconstruct the rare decays B+→K+μ+μ-, B0→K*(892)0μ+μ-, and Bs0→φ(1020)μ+μ- in a data sample corresponding to 4.4 fb-1 collected in pp̄ collisions at √s = 1.96 TeV by the CDF II detector at the Tevatron Collider. Using 121±16 B+→K+μ+μ- and 101±12 B0→K*0μ+μ- decays we report the branching ratios. In addition, we report the differential branching ratio and the muon forward-backward asymmetry in the B+ and B0 decay modes, and the K*0 longitudinal polarization fraction in the B0 decay mode, with respect to the squared dimuon mass. These are consistent with the predictions and with the most recent determinations from other experiments, and of comparable accuracy. We also report the first observation of the Bs0→φμ+μ- decay and measure its branching ratio BR(Bs0→φμ+μ-) = [1.44±0.33±0.46]×10-6 using 27±6 signal events. This is currently the rarest Bs0 decay observed. © 2011 American Physical Society
Search for a new heavy gauge boson W′ with event signature electron+missing transverse energy in pp̅ collisions at √s=1.96 TeV
We present a search for a new heavy charged vector boson W′ decaying to an electron-neutrino pair in pp̅ collisions at a center-of-mass energy of 1.96 TeV. The data were collected with the CDF II detector and correspond to an integrated luminosity of 5.3 fb-1. No significant excess above the standard model expectation is observed and we set upper limits on σ·B(W′→eν). Assuming standard model couplings to fermions and the neutrino from the W′ boson decay to be light, we exclude a W′ boson with mass less than 1.12 TeV/c2 at the 95% confidence level.
Search for New Dielectron Resonances and Randall-Sundrum Gravitons at the Collider Detector at Fermilab
A search for new dielectron-mass resonances using data recorded by the CDF II detector and corresponding to an integrated luminosity of 5.7 fb-1 is presented. No significant excess over the expected standard model prediction is observed. In this data set, an event with the highest dielectron mass ever observed (960 GeV/c2) was recorded. The results are interpreted in the Randall-Sundrum (RS) model. Combined with the 5.4 fb-1 diphoton analysis, the RS-graviton lower-mass limit for the coupling k/M̄Pl = 0.1 is 1058 GeV/c2, making it the strongest limit to date. © 2011 American Physical Society
Image quality assessment for machine learning tasks using meta-reinforcement learning
In this paper, we consider image quality assessment (IQA) as a measure of how amenable images are to a given downstream task, or their task amenability. When the task is performed using machine learning algorithms, such as a neural-network-based task predictor for image classification or segmentation, the performance of the task predictor provides an objective estimate of task amenability. In this work, we use an IQA controller to predict task amenability; the controller, itself parameterised by neural networks, can be trained simultaneously with the task predictor. We further develop a meta-reinforcement learning framework to improve the adaptability of both IQA controllers and task predictors, such that they can be fine-tuned efficiently on new datasets or meta-tasks. We demonstrate the efficacy of the proposed task-specific, adaptable IQA approach using two clinical applications: ultrasound-guided prostate intervention and pneumonia detection on X-ray images.
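For completeness, one possible way to use such a trained controller at inference time, for example to flag images for re-acquisition, is sketched below. It reuses the Controller class from the first sketch; the helper name and the threshold value are hypothetical.

import torch

@torch.no_grad()
def filter_by_amenability(controller, images, threshold=0.5):
    """Return indices of images the controller deems amenable to the downstream task."""
    scores = controller(images)               # predicted selection probabilities in [0, 1]
    return (scores >= threshold).nonzero(as_tuple=True)[0]

# Example with random stand-in features:
# controller = Controller(feat_dim=128)       # from the earlier sketch
# kept = filter_by_amenability(controller, torch.randn(16, 128), threshold=0.5)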
