    Art and Medicine: from anatomic studies to Visual Thinking Strategies

    Over the centuries, the collaboration between artists and doctors, and the relationship between the disciplines of art and medicine, have been well documented. Since the 1960s the discipline of medical humanities has developed in order to enrich medical education with the humanities. In the belief that medicine is more than a set of knowledge and technical skills, medical educators have considered it important to include humanities such as art, literature, philosophy, ethics, and history in the curriculum that trains a good doctor. Although there are earlier examples of art being used in medical curricula as a tool to develop the cognitive skills of observation and description, there is a general consensus that semiotic competence starts from correct and deep observation, the "clinical eye", using the senses to diagnose disease. In this context one can speak of Visual Thinking Strategies (VTS). VTS structures the observation of a work of art through a process of analysis, comparison, and discussion with others, allowing the medical student to acquire a method that can also be applied in clinical activity: improving skills in patient examination, strengthening problem solving and critical thinking, building habits of teamwork, and stimulating empathy toward the patient and respect for others (whether patient or colleague). Observation practice should be a key element of medical training, and this approach can be an aid to improving clinical skills. A trial of VTS for medical students, connected to the Semiotics Course and run in collaboration with the Galleria Borghese in Rome, was carried out during the last academic year at the Degree Course in Medicine of the Faculty of Medicine and Psychology of Sapienza University.

    Keywords: medical humanities, art, Visual Thinking Strategies

    Introduction
    Art understood as 'tèchne' can be described as the application of established rules and of experience elaborated by man, that is, of knowledge, in order to make objects or to depict images taken from reality or from the world of fantasy. Medical science is a discipline defined as an art insofar as it applies knowledge, namely the science of curing disease. Over the centuries these disciplines have developed many relationships; indeed, the cooperation between artists and doctors is well documented. Consider Classical Antiquity, when artists could learn anatomical features only by observing athletes in the gymnasium. These features were still unknown to doctors, who could not, for example, dissect corpses, because the practice was prohibited for religious reasons. Artists were nevertheless able to "admire" the play of stretching muscles in sculpture; an example is Myron's Discobolus. In the field of medicine, only Herophilus of Chalcedon and Erasistratus, in the third century B.C., had carried out dissections of "live" human bodies (1), until in 1241 Federico II promulgated the edict that authorized and encouraged the use of cadavers by doctors. In 1316 Mondino de Liuzzi wrote "Anothomia", founding the first school of human anatomy in Europe. One had to wait for the Renaissance, and the birth of "modern" medicine, to find artists themselves using human bodies for their anatomical studies; the first known example was Il Pollaiolo.

    $B^0_{(s)}$-mixing matrix elements from lattice QCD for the Standard Model and beyond

    We calculate, for the first time in three-flavor lattice QCD, the hadronic matrix elements of all five local operators that contribute to neutral $B^0$- and $B_s$-meson mixing in and beyond the Standard Model. We present a complete error budget for each matrix element and also provide the full set of correlations among the matrix elements. We also present the corresponding bag parameters and their correlations, as well as specific combinations of the mixing matrix elements that enter the expression for the neutral $B$-meson width difference. We obtain the most precise determination to date of the SU(3)-breaking ratio $\xi = 1.206(18)(6)$, where the second error stems from the omission of charm sea quarks, while the first encompasses all other uncertainties. The threefold reduction in total uncertainty, relative to the 2013 Flavor Lattice Averaging Group results, tightens the constraint from $B$ mixing on the Cabibbo-Kobayashi-Maskawa (CKM) unitarity triangle. Our calculation employs gauge-field ensembles generated by the MILC Collaboration with four lattice spacings and pion masses close to the physical value. We use the asqtad-improved staggered action for the light valence quarks, and the Fermilab method for the bottom quark. We use heavy-light meson chiral perturbation theory, modified to include lattice-spacing effects, to extrapolate the five matrix elements to the physical point. We combine our results with experimental measurements of the neutral $B$-meson oscillation frequencies to determine the CKM matrix elements $|V_{td}| = 8.00(34)(8) \times 10^{-3}$, $|V_{ts}| = 39.0(1.2)(0.4) \times 10^{-3}$, and $|V_{td}/V_{ts}| = 0.2052(31)(10)$, which differ from CKM-unitarity expectations by about $2\sigma$. These results and others from flavor-changing neutral currents point towards an emerging tension between weak processes that are mediated at the loop and tree levels.

    Comment: 75 pp, 17 figs. Ver 2 fixes typos; corrects mistakes resulting in slight changes to results and correlation matrices; updates decay constants to agree with the recent PDG update; corrects uncertainties for tree-level CKM matrix elements used in comparison, slightly reducing tensions; includes additional analyses that support mostly-nonperturbative matching; expands discussion of isospin-breaking effects
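    To make the role of $\xi$ concrete: in the Standard Model the oscillation frequencies satisfy $\Delta m_s/\Delta m_d = (m_{B_s}/m_{B_d})\,\xi^2\,|V_{ts}/V_{td}|^2$, so $|V_{td}/V_{ts}| = \xi\,\sqrt{\Delta m_d\, m_{B_s}/(\Delta m_s\, m_{B_d})}$. The Python sketch below evaluates the central value of this ratio; the oscillation frequencies and meson masses are approximate PDG-era inputs treated as exact here, not the paper's full correlated error analysis.

```python
from math import sqrt

# Lattice input from this work: SU(3)-breaking ratio
# xi = f_Bs * sqrt(B_Bs) / (f_Bd * sqrt(B_Bd))
xi = 1.206        # +/- 0.018 (lattice) +/- 0.006 (charm-sea omission)

# Approximate experimental inputs (illustrative PDG-era values)
dm_d = 0.5064     # B0 oscillation frequency, ps^-1
dm_s = 17.757     # Bs oscillation frequency, ps^-1
m_Bd = 5279.6     # B0 mass, MeV
m_Bs = 5366.9     # Bs mass, MeV

# SM relation: dm_s / dm_d = (m_Bs / m_Bd) * xi^2 * |V_ts / V_td|^2
ratio = xi * sqrt(dm_d * m_Bs / (dm_s * m_Bd))

print(f"|V_td/V_ts| ~ {ratio:.4f}")  # ~0.205, cf. 0.2052(31)(10) in the abstract
```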

    Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites: TL-LUE Parameterization and Validation

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed to simulate GPP separately for sunlit and shaded leaves, and it has been shown to outperform the big-leaf MOD17 model at six flux sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. To this end, we optimized the parameters and tested the TL-LUE model using data from 98 FLUXNET sites distributed across the globe. The results showed that the TL-LUE model generally performed better than the MOD17 model in simulating 8-day GPP. The optimized maximum light use efficiency of shaded leaves (εmsh) was 2.63 to 4.59 times that of sunlit leaves (εmsu). Generally, the relationships of εmsh and εmsu with εmax were well described by linear equations, indicating the existence of general patterns across biomes. GPP simulated by the TL-LUE model was much less sensitive to biases in the photosynthetically active radiation (PAR) input than GPP simulated by the MOD17 model. The results of this study suggest that the TL-LUE model has the potential to simulate regional and global GPP of terrestrial ecosystems, and that it is more robust to common biases in input data than existing approaches that neglect the bimodal within-canopy distribution of PAR.
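    As a rough illustration of the two-leaf idea, the sketch below computes GPP as (εmsu·APARsu + εmsh·APARsh)·f(Tmin)·f(VPD), with a de Pury and Farquhar style sunlit/shaded LAI split and MOD17-style linear-ramp stress scalars. The parameter values, ramp endpoints, and the crude APAR partitioning are illustrative assumptions, not the calibrated values from this study.

```python
import math

def ramp(x, lo, hi):
    """Linear ramp from 0 (at lo) to 1 (at hi); MOD17-style stress scalar."""
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def tl_lue_gpp(par, lai, cos_sza, t_min_c, vpd_pa,
               eps_su=1.0, eps_sh=3.0,      # max LUE, gC/MJ; eps_sh ~ 3x eps_su,
                                            # in the 2.63-4.59x range reported above
               omega=0.8, k_diffuse=0.5):   # clumping index, extinction coefficient
    """Two-leaf LUE GPP (gC m-2 per time step) under simplified assumptions."""
    # Sunlit/shaded LAI partition (de Pury & Farquhar style).
    lai_su = 2.0 * cos_sza * (1.0 - math.exp(-k_diffuse * omega * lai / cos_sza))
    lai_sh = max(lai - lai_su, 0.0)

    # Crude APAR split proportional to sunlit/shaded leaf area (illustrative).
    fpar = 1.0 - math.exp(-k_diffuse * lai)
    apar = fpar * par
    apar_su = apar * lai_su / max(lai, 1e-6)
    apar_sh = apar - apar_su

    # MOD17-style environmental scalars (ramp endpoints are assumptions).
    f_tmin = ramp(t_min_c, -8.0, 9.0)           # degC
    f_vpd = 1.0 - ramp(vpd_pa, 650.0, 3100.0)   # Pa; high VPD suppresses GPP

    return (eps_su * apar_su + eps_sh * apar_sh) * f_tmin * f_vpd

# Example: one 8-day step with PAR in MJ m-2
print(tl_lue_gpp(par=80.0, lai=3.0, cos_sza=0.7, t_min_c=12.0, vpd_pa=900.0))
```

    The shaded-leaf term dominates for a closed canopy here, which is why, as the abstract notes, a big-leaf model driven by a single εmax is far more sensitive to PAR biases than the two-leaf formulation.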

    Tight Bounds on the Round Complexity of the Distributed Maximum Coverage Problem

    We study the maximum $k$-set coverage problem in the following distributed setting. A collection of sets $S_1,\ldots,S_m$ over a universe $[n]$ is partitioned across $p$ machines, and the goal is to find $k$ sets whose union covers the largest number of elements. The computation proceeds in synchronous rounds. In each round, all machines simultaneously send a message to a central coordinator, who then communicates back to all machines a summary to guide the computation for the next round. At the end, the coordinator outputs the answer. The main measures of efficiency in this setting are the approximation ratio of the returned solution, the communication cost of each machine, and the number of rounds of computation. Our main result is an asymptotically tight bound on the tradeoff between these measures for the distributed maximum coverage problem. We first show that any $r$-round protocol for this problem either incurs a communication cost of $k \cdot m^{\Omega(1/r)}$ or achieves an approximation factor of only $k^{\Omega(1/r)}$. This implies that any protocol that simultaneously achieves a good approximation ratio ($O(1)$ approximation) and a good communication cost ($\widetilde{O}(n)$ communication per machine) essentially requires a logarithmic (in $k$) number of rounds. We complement our lower bound by showing that there exists an $r$-round protocol that achieves an $\frac{e}{e-1}$-approximation (essentially best possible) with a communication cost of $k \cdot m^{O(1/r)}$, as well as an $r$-round protocol that achieves a $k^{O(1/r)}$-approximation with only $\widetilde{O}(n)$ communication per machine (essentially best possible). We further use our results in this distributed setting to obtain new bounds for the maximum coverage problem in two other main models of computation for massive datasets, namely the dynamic streaming model and the MapReduce model.
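    For reference, the $\frac{e}{e-1}$ factor is the guarantee of the classical greedy algorithm for maximum $k$-coverage, which is the centralized baseline that distributed protocols try to emulate under communication constraints. A minimal single-machine sketch of that greedy baseline follows; the paper's distributed, sketch-based protocols are considerably more involved, and this is only the reference point.

```python
def greedy_max_coverage(sets, k):
    """Classical greedy for maximum k-coverage: repeatedly pick the set
    covering the most not-yet-covered elements. Achieves a (1 - 1/e)
    fraction of the optimum, i.e. an e/(e-1)-factor approximation."""
    covered = set()
    chosen = []
    for _ in range(min(k, len(sets))):
        # Index of the set with the largest marginal coverage.
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # no remaining set adds new elements
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Toy instance: m = 4 sets over the universe {1, ..., 6}, k = 2
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
chosen, covered = greedy_max_coverage(sets, k=2)
print(chosen, covered)  # [0, 2] covering {1, 2, 3, 4, 5, 6}
```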