Learning from failure
We study decentralized learning in organizations. Decentralization is captured through a symmetry constraint on agents' strategies. Among such attainable strategies, we solve for optimal and equilibrium strategies. We model the organization as a repeated game with imperfectly observable actions. A fixed but unknown subset of action profiles are successes; all other action profiles are failures. The game is played until either there is a success or the time horizon is reached. For any time horizon, including infinity, we demonstrate the existence of optimal attainable strategies and show that they are Nash equilibria. For some time horizons, we can solve explicitly for the optimal attainable strategies and show uniqueness. The solution connects the learning behavior of agents to the fundamentals that characterize the organization: agents respond more slowly to failure as the future becomes more important, as the size of the organization increases, and as the probability of success decreases.
Automated multiclass segmentation, quantification, and visualization of the diseased aorta on hybrid PET/CT – SEQUOIA
Background
Cardiovascular disease, including infection- and inflammation-related conditions, is the most common cause of death worldwide. Multiple studies have demonstrated potential advantages of hybrid positron emission tomography combined with computed tomography (PET/CT) as an adjunct to current clinical inflammatory and infectious biochemical markers. Quantitative analysis of vascular disease at PET/CT requires robust segmentation of the aorta; however, manual segmentation is extremely time-consuming and labor-intensive.
Purpose
To investigate the feasibility and accuracy of an automated tool to segment and quantify multiple parts of the diseased aorta on unenhanced low-dose computed tomography (LDCT) as an anatomical reference for PET-assessed vascular disease.
Methods
A software pipeline was developed including automated segmentation using a 3D U-Net, calcium scoring, PET uptake quantification, background measurement, radiomics feature extraction, and 2D surface visualization of vessel wall calcium and tracer uptake distribution. To train the 3D U-Net, 352 non-contrast LDCTs from (2-[18F]FDG and Na[18F]F) PET/CTs, performed in patients with various vascular pathologies, with manual segmentations of the ascending aorta, aortic arch, descending aorta, and abdominal aorta were used. The last 22 consecutive scans were used as a hold-out internal test set. The remaining dataset was randomly split into training (n = 264; 80%) and validation (n = 66; 20%) sets. Further evaluation was performed on an external test set of 49 PET/CTs. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used to assess segmentation performance. Automatically obtained calcium scores and uptake values were compared with manual scoring obtained using clinical software (syngo.via and Affinity Viewer) in six patient images. Intraclass correlation coefficients (ICC) were calculated to validate calcium and uptake values.
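The Dice similarity coefficient used here to score segmentations has a compact definition: twice the overlap of the two masks divided by their total volume. The sketch below is a minimal NumPy illustration of the metric, not the evaluation code used in the paper:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 2D example (the paper evaluates 3D aortic masks)
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(a, b))  # 2*2 / (3+3) ≈ 0.667
```

A DSC of 1 indicates perfect overlap with the manual reference; the 0.867 reported below therefore reflects close but not exact agreement.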
Results
Fully automated segmentation of the aorta using a 3D U-Net was feasible in LDCT obtained from PET/CT scans. The external test set yielded a DSC of 0.867 ± 0.030 and an HD of 1.0 [0.6–1.4] mm, similar to an open-source model with a DSC of 0.864 ± 0.023 and an HD of 1.4 [1.0–1.8] mm. Quantified calcium and uptake values were in excellent agreement with clinical software (ICC: 1.00 [1.00–1.00] and 0.99 [0.93–1.00] for calcium and uptake values, respectively).
Conclusions
We present an automated pipeline to segment the ascending aorta, aortic arch, descending aorta, and abdominal aorta on LDCT from PET/CT and to accurately provide uptake values, calcium scores, background measurements, radiomics features, and a 2D visualization. We call this algorithm SEQUOIA (SEgmentation, QUantification, and visualizatiOn of the dIseased Aorta); it is available at https://github.com/UMCG-CVI/SEQUOIA. This model could greatly augment the utility of aortic evaluation in PET/CT studies, irrespective of the tracer, and could potentially provide fast and reliable quantification of cardiovascular disease in clinical practice, both for primary diagnosis and for disease monitoring.
B flavour tagging using charm decays at the LHCb experiment
An algorithm is described for tagging the flavour content at production of neutral B mesons in the LHCb experiment. The algorithm exploits the correlation of the flavour of a B meson with the charge of a reconstructed secondary charm hadron from the decay of the other b hadron produced in the proton-proton collision. Charm hadron candidates are identified in a number of fully or partially reconstructed Cabibbo-favoured decay modes. The algorithm is calibrated on self-tagged decay modes using data collected by the LHCb experiment at centre-of-mass energies of 7 and 8 TeV. Its tagging power is measured on these calibration samples.
Comment: All figures and tables, along with any supplementary material and additional information, are available at http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/LHCb-PAPER-2015-027.htm
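"Tagging power" is the standard figure of merit for a flavour-tagging algorithm: the effective efficiency εeff = εtag(1 − 2ω)², where εtag is the fraction of candidates that receive a tag and ω is the mistag probability. A minimal sketch of that formula; the numerical inputs are illustrative only, since the measured values are elided in this abstract:

```python
def tagging_power(eps_tag: float, mistag: float) -> float:
    """Effective tagging efficiency eps_tag * (1 - 2*omega)^2,
    the standard figure of merit for flavour tagging."""
    dilution = 1.0 - 2.0 * mistag  # a 50% mistag rate carries no information
    return eps_tag * dilution ** 2

# Illustrative inputs: 4% tagging efficiency, 35% mistag rate
print(tagging_power(0.04, 0.35))
```

The quadratic dependence on the dilution explains why even small reductions in the mistag rate matter far more than gains in raw tagging efficiency.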
Use of the agarose cell block as a complementary method in the cytological diagnosis of canine mammary tumors
Identification of beauty and charm quark jets at LHCb
Identification of jets originating from beauty and charm quarks is important for measuring Standard Model processes and for searching for new physics. The performance of algorithms developed to select b- and c-quark jets is measured using data recorded by LHCb from proton-proton collisions at √s = 7 TeV in 2011 and at √s = 8 TeV in 2012. The efficiency for identifying a b (c) jet is about 65% (25%), with a probability of misidentifying a light-parton jet of 0.3%, for jets with transverse momentum pT > 20 GeV and pseudorapidity 2.2 < η < 4.2. The dependence of the performance on the pT and η of the jet is also measured.
Observation of the B0 → ρ0ρ0 decay from an amplitude analysis of B0 → (π+π−)(π+π−) decays
Proton-proton collision data recorded in 2011 and 2012 by the LHCb experiment, corresponding to an integrated luminosity of 3.0 fb−1, are analysed to search for the charmless B0 → ρ0ρ0 decay. More than 600 B0 → (π+π−)(π+π−) signal decays are selected and used to perform an amplitude analysis, under the assumption of no CP violation in the decay, from which the B0 → ρ0ρ0 decay is observed for the first time with 7.1 standard deviations significance. The fraction of B0 → ρ0ρ0 decays yielding a longitudinally polarised final state is measured to be fL = 0.745 +0.048/−0.058 (stat) ± 0.034 (syst). The B0 → ρ0ρ0 branching fraction, using the B0 → φK*(892)0 decay as reference, is also reported as
B(B0 → ρ0ρ0) = (0.94 ± 0.17 (stat) ± 0.09 (syst) ± 0.06 (BF)) × 10−6.
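When a single total uncertainty on a branching fraction such as this one is wanted, the statistical, systematic, and reference-branching-fraction components are conventionally added in quadrature. A minimal sketch, assuming the three components are independent (the abstract quotes them separately and does not combine them):

```python
from math import sqrt

def combine_in_quadrature(*uncertainties: float) -> float:
    """Total uncertainty from independent components, added in quadrature."""
    return sqrt(sum(u * u for u in uncertainties))

# Components quoted for B(B0 -> rho0 rho0), in units of 1e-6:
stat, syst, bf_ref = 0.17, 0.09, 0.06
print(round(combine_in_quadrature(stat, syst, bf_ref), 2))  # ≈ 0.2 (x 1e-6)
```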
Observation of the decay B0s → ψ(2S)K+π−
The decay B0s → ψ(2S)K+π− is observed using a data set corresponding to an integrated luminosity of 3.0 fb−1 collected by the LHCb experiment in pp collisions at centre-of-mass energies of 7 and 8 TeV. The branching fraction relative to the B0 → ψ(2S)K+π− decay mode is measured to be
B(B0s → ψ(2S)K+π−) / B(B0 → ψ(2S)K+π−) = (5.38 ± 0.36 (stat) ± 0.22 (syst) ± 0.31 (fs/fd))%,
where fs/fd indicates the uncertainty due to the ratio of probabilities for a b quark to hadronise into a B0s or B0 meson. Using an amplitude analysis, the fraction of decays proceeding via an intermediate K*(892)0 meson is measured to be 0.645 ± 0.049 (stat) ± 0.049 (syst) and its longitudinal polarisation fraction is 0.524 ± 0.056 (stat) ± 0.029 (syst). The relative branching fraction for this component is determined to be
B(B0s → ψ(2S)K*(892)0) / B(B0 → ψ(2S)K*(892)0) = (5.58 ± 0.57 (stat) ± 0.40 (syst) ± 0.32 (fs/fd))%.
In addition, the mass splitting between the B0s and B0 mesons is measured as
M(B0s) − M(B0) = 87.45 ± 0.44 (stat) ± 0.09 (syst) MeV/c2.
Measurement of the CP-violating phase β in B0 → J/ψπ+π− decays and limits on penguin effects
Time-dependent CP violation is measured in the B0 → J/ψπ+π− channel (together with its charge conjugate) for each π+π− resonant final state using data corresponding to an integrated luminosity of 3.0 fb−1 collected in pp collisions with the LHCb detector. The final state with the largest rate, J/ψρ0(770), is used to measure the CP-violating angle 2βeff to be (41.7 ± 9.6 +2.8/−6.3)°. This result can be used to limit the size of penguin amplitude contributions to CP violation measurements in, for example, B0s → J/ψφ decays. Assuming approximate SU(3) flavour symmetry and neglecting higher-order diagrams, the shift in the CP-violating phase φs is limited to be within the interval [−1.05°, +1.18°] at 95% confidence level. Changes to the limit due to SU(3) symmetry-breaking effects are also discussed.
LHCb detector performance
The LHCb detector is a forward spectrometer at the Large Hadron Collider (LHC) at CERN. The experiment is designed for precision measurements of CP violation and rare decays of beauty and charm hadrons. In this paper the performance of the various LHCb sub-detectors and the trigger system is described, using data taken from 2010 to 2012. It is shown that the design criteria of the experiment have been met. The excellent performance of the detector has allowed the LHCb collaboration to publish a wide range of physics results, demonstrating LHCb's unique role, both as a heavy flavour experiment and as a general purpose detector in the forward region.
Search for CP violation in D0 → π−π+π0 decays with the energy test
A search for time-integrated CP violation in the Cabibbo-suppressed decay D0 → π−π+π0 is performed using, for the first time, an unbinned model-independent technique known as the energy test. Using proton-proton collision data, corresponding to an integrated luminosity of 2.0 fb−1 collected by the LHCb detector at a centre-of-mass energy of √s = 8 TeV, the world's best sensitivity to CP violation in this decay is obtained. The data are found to be consistent with the hypothesis of CP symmetry with a p-value of (2.6 ± 0.5)%.
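For intuition, a p-value like the one above can be translated into an equivalent Gaussian significance. The sketch below assumes the usual two-sided convention, which the abstract does not spell out, and uses only the Python standard library:

```python
from statistics import NormalDist

def p_to_sigma(p_value: float) -> float:
    """Equivalent Gaussian significance of a two-sided p-value."""
    return NormalDist().inv_cdf(1.0 - p_value / 2.0)

# The quoted p-value of (2.6 +/- 0.5)% corresponds to roughly 2.2 sigma,
# i.e. no significant deviation from CP symmetry:
print(round(p_to_sigma(0.026), 1))
```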