11 research outputs found

    Measurement of Inclusive and DiJet D*-Meson Photoproduction at the H1 Experiment at HERA

    In this thesis, the production mechanism of charm quarks in electron-proton scattering at the HERA storage ring is investigated. The analysed data sets correspond to luminosities of 30.68 pb^-1, 68.23 pb^-1 and 93.39 pb^-1. Events containing charm quarks are identified by reconstructing D*-mesons in the kinematic regime of photoproduction. For the first time, D*-mesons are selected using the third level of the Fast Track Trigger of the H1 experiment. This substantially extended the accessible phase space compared with the previous measurement and increased the statistics by a factor of eight. The investigated kinematic range extends over a photon virtuality of Q^2 4 GeV and p_t > 3 GeV in the pseudorapidity range |eta| < 1.5, where one of the selected jets is required to be associated with the D*-meson. The reconstruction of two hard partons provides deeper insight into the production mechanism of the charm quarks. This measurement shows that resolved-photon processes play a decisive role in the photoproduction of charm quarks in the studied phase space. Single- and double-differential cross sections of both event samples are compared with predictions of perturbative QCD at leading and next-to-leading order

    The Simulation of the ATLAS Liquid Argon Calorimetry

    In ATLAS, all of the electromagnetic calorimetry and part of the hadronic calorimetry is performed by a calorimeter system using liquid argon as the active material, together with various types of absorbers. The liquid argon calorimeter consists of four subsystems: the electromagnetic barrel and endcap accordion calorimeters, the hadronic endcap calorimeters, and the forward calorimeters. A very accurate geometrical description of these calorimeters is used as input to the Geant4-based ATLAS simulation, and a careful modelling of the signal development is applied in the generation of hits. Certain types of Monte Carlo truth information ("Calibration Hits") may additionally be recorded for calorimeter cells as well as for dead material. This note is a comprehensive reference describing the simulation of the four liquid argon calorimeter components

    Likelihood Based Inference and Diagnostics for Spatial Data Models


    A HIERARCHICAL FRAMEWORK FOR STATISTICAL MODEL VALIDATION OF ENGINEERED SYSTEMS

    As the role of computational models has increased, the accuracy of computational results has become a great concern to engineering decision-makers. To address this growing concern about the predictive capability of computational models, this dissertation proposes a generic model validation framework with four research objectives: Objective 1, to develop a hierarchical framework for statistical model validation that is applicable to various computational models of engineered products (or systems); Objective 2, to advance a model calibration technique that can help improve the predictive capability of computational models in a statistical manner; Objective 3, to build a validity check engine for a computational model with limited experimental data; and Objective 4, to demonstrate the feasibility and effectiveness of the proposed validation framework with five engineering problems requiring different experimental resources and predictive computational models: (a) a cellular phone, (b) a tire tread block, (c) the thermal challenge problem, (d) a constrained-layer damping structure and (e) an energy harvesting device. The validation framework consists of three activities: validation planning (top-down), validation execution (bottom-up) and virtual qualification. The validation planning activity requires knowledge about physics-of-failure (PoF) mechanisms and/or system performances of interest. This knowledge makes it possible to decompose an engineered system into subsystems and/or components such that the PoF mechanisms or system performances of interest can be decomposed accordingly. The validation planning activity takes a top-down approach and identifies vital tests and predictive computational models that contain both known and unknown model input variables.
    The validation execution activity, in contrast, takes a bottom-up approach, improving the predictive capability of the computational models from the lowest level to the highest using the statistical calibration technique. This technique compares experimental results with those predicted by the computational model and determines the best statistical distributions of the unknown random variables by maximizing the likelihood function. Once the predictive capability of a computational model at a lower hierarchical level has been improved, the enhanced model can be fused into the model at the next higher hierarchical level, and the validation execution activity continues there. After statistical model calibration, the validity of the calibrated model must be assessed; a hypothesis-test-based validity check method was therefore developed to measure and evaluate the degree of mismatch between predicted and observed results while accounting for the uncertainty caused by limited experimental data. If the model is found valid, virtual qualification can be carried out in a statistical sense for new product development. With five case studies, this dissertation demonstrates that the validation framework is applicable to diverse classes of engineering problems for improving the predictive capability of computational models, assessing their fidelity, and assisting rational decision-making on new design alternatives in the product development process
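The statistical calibration step described above can be sketched as a maximum-likelihood fit. This is a minimal illustration, not the dissertation's implementation: it assumes Gaussian observation noise, a single unknown model input, and a toy linear model; the names `simulate` and `calibrate` are hypothetical.

```python
import math
import random

# Hypothetical computational model with one unknown input parameter theta.
def simulate(theta, x):
    return theta * x  # placeholder stand-in for a real physics model

def log_likelihood(theta, data, sigma=0.1):
    """Gaussian log-likelihood of the observed responses given the model."""
    ll = 0.0
    for x, y_obs in data:
        y_pred = simulate(theta, x)
        ll += -0.5 * ((y_obs - y_pred) / sigma) ** 2 \
              - math.log(sigma * math.sqrt(2 * math.pi))
    return ll

def calibrate(data, grid):
    """Pick the theta that maximizes the likelihood over a coarse grid."""
    return max(grid, key=lambda t: log_likelihood(t, data))

# Synthetic 'experiments' generated with a true theta of 2.0 plus noise.
random.seed(0)
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in [0.5, 1.0, 1.5, 2.0]]
grid = [i / 100 for i in range(100, 301)]  # candidate thetas in [1.0, 3.0]
theta_hat = calibrate(data, grid)
print(theta_hat)  # close to the true value 2.0 given the small noise
```

In the hierarchical framework, the distribution calibrated at a lower level would then feed the higher-level model rather than a point estimate; the grid search here merely keeps the sketch dependency-free.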

    Modelling election poll data using time series analysis

    There is much interest in election forecasting in the UK. On election night, forecasts are made and revised as the night progresses and seats declare their results. We propose a new time series model for use in this context. Firstly, we build statistical models for the polls conducted in the run-up to the election; the model produces the distribution of voting amongst the parties. The key is to treat the voting probabilities underlying each poll as latent variables. Secondly, we use this information to forecast the eventual outcome, continually revising our forecasts as the actual declarations are made, until we can determine what we believe the final outcome to be, before it actually happens. We outline the nature and history of elections in the UK and provide an account of time series analysis. These tools, as well as the theoretical basis of our method, the h-likelihood, are then applied to the creation of each of the proposed models. We study simulations of the models and then fit them to actual data to assess forecasting accuracy, using existing models for comparison
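The latent-variable view of polling can be sketched with a toy simulation: a party's true support evolves as a random walk that is never observed directly, and each poll is a noisy binomial snapshot of it. This is an illustrative sketch only, with simple smoothing standing in for the thesis's full h-likelihood inference; all numbers are invented.

```python
import random

random.seed(1)

# Latent support for one party evolves as a bounded random walk.
T, n_respondents = 30, 1000
support = [0.45]
for _ in range(T - 1):
    step = random.gauss(0, 0.01)
    support.append(min(max(support[-1] + step, 0.0), 1.0))

# Each poll is a binomial draw around the (unobserved) latent probability.
polls = [sum(random.random() < p for _ in range(n_respondents)) / n_respondents
         for p in support]

def smooth(series, alpha=0.3):
    """Crude filtered estimate of the latent support (exponential smoothing),
    standing in for the likelihood-based inference used in the thesis."""
    est = [series[0]]
    for y in series[1:]:
        est.append(alpha * y + (1 - alpha) * est[-1])
    return est

estimates = smooth(polls)
```

On election night, such a filtered trajectory would be revised further as actual seat declarations replace poll-based information.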

    Sympathy for the devil: On the neural mechanisms of threat and distress reactivity


    Behavioral and fMRI-based Characterization of Cognitive Processes Supporting Learning and Retrieval of Memory for Words in Young Adults

    A novel word is rarely defined explicitly during the first encounter. With repeated exposure, a decontextualized meaning of the word is integrated into semantic memory. With the overarching goal of characterizing the functional neuroanatomy of semantic processing in young adults, we employed a contextual word learning paradigm, creating novel synonyms for common animal/artifact nouns that, along with additional real words, served as stimuli for the lexical-decision based functional MRI (fMRI) experiment. Young adults (n=28) were given two types of word learning training administered in multiple sessions spread out over three days. The first type of training provided perceptual form-only training to pseudoword (PW) stimuli using a PW-detection task. The second type of training assigned the meaning of common artifacts and animals to PWs using multiple sentences to allow contextual meaning acquisition, essentially creating novel synonyms. The underlying goals were twofold: 1) to test, using a behavioral semantic priming paradigm, the hypothesis that novel words acquired in adulthood get integrated into existing semantic networks (discussed in Chapter 2); and 2) to investigate the functional neuroanatomy of semantic processing in young adults, at the single word level, using the newly learned as well as previously known word stimuli as a conduit (discussed in Chapter 3). As outlined in Chapter 2, in addition to the semantic priming test mentioned above, two additional behavioral tests were administered to assess word learning success. The first was a semantic memory test using a two-alternative sentence completion task. Participants demonstrated robust accuracy (~87%) in choosing the appropriate meaning-trained item to complete a novel sentence. Second, an old/new item recognition test was administered using both meaning and form trained stimuli (old) as well as novel foil PWs (new). 
    Participants demonstrated: a) high discriminability between trained and novel PW stimuli (d-prime = 2.72); and b) faster reaction times and higher accuracy for meaning-trained items relative to perceptually-trained items, consistent with prior level-of-processing research. The results from the recognition and semantic memory tests confirmed that subjects could explicitly recognize trained items as well as demonstrate knowledge of the newly acquired synonymous meanings. Finally, using a lexical decision task, a semantic priming test assessed semantic integration using the novel trained items as primes for word targets that had no prior episodic association with the primes. Relative to perceptually trained primes, meaning-trained primes significantly facilitated lexical decision latencies for synonymous word targets. Taken together, the behavioral findings outlined above demonstrate that a contextual approach is effective in facilitating word learning in young adults. Words learned over a few experimental sessions were successfully retained in declarative memory, as demonstrated by behavioral performance in the semantic memory and recognition memory experiments. In addition, relative to perceptually-trained PWs, the newly meaning-trained PWs, when used as primes in a semantic priming test, facilitated lexical decisions for synonymous real words with which the primes had no prior episodic association. The latter finding confirms our primary behavioral hypothesis that novel words acquired in adulthood are represented similarly, i.e. integrated into the same semantic memory representational network, as common words likely acquired early in the lifetime. Chapter 3 outlines the findings from the fMRI experiment used to investigate the functional neuroanatomy of semantic processing using the newly learned as well as previously known words as stimuli in a lexical decision task.
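The d-prime index reported above is the standard signal-detection discriminability measure: the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch, with hypothetical rates chosen for illustration (the study's underlying hit and false-alarm rates are not given here):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate), using the inverse normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for illustration only.
print(round(d_prime(0.95, 0.10), 2))  # -> 2.93
```

Rates of exactly 0 or 1 would make the inverse CDF diverge, which is why applied work typically applies a small correction before computing d'.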
    fMRI data were collected using a widely-spaced event-related design, allowing isolation of item-level hemodynamic responses. Two fMRI sessions, separated by 2-3 days, were administered: the 1st conducted prior to, and the 2nd following, word-learning training. Using the same items as stimuli in the fMRI sessions conducted before and after behavioral training facilitated a within-item analysis in which each item effectively served as its own control. A set of stringent criteria, outlined below, was established a priori describing characteristics expected from regions with a role in retrieving/processing meanings at the single word level. We expected a putative semantic processing region to exhibit: a) higher BOLD activity during the 1st fMRI session for real words relative to novel PWs; b) reduced BOLD activity for repeated real words presented in the 2nd fMRI session relative to levels seen in the 1st fMRI session; c) higher BOLD activity for meaning-trained PWs relative to novel PWs; d) higher BOLD activity for meaning-trained PWs relative to perceptually-trained PWs; and e) higher BOLD activity for correctly identified meaning-trained PWs (hits) relative to their incorrect counterparts (misses). Given their previously documented associations with semantic processing, we expected regions in left middle temporal gyrus (MTG) and left ventral inferior frontal gyrus (vIFG) to exhibit timecourses consistent with most of the semantic criteria outlined above. Individual ANOVA contrasts, essentially targeting each of the criteria outlined above, were conducted at the voxelwise level. A fixed effects analysis based on 4 correct trial ANOVA contrasts (corresponding to criteria a-d, above) generated 81 regions of interest, and two individual error vs. correct trial ANOVA contrasts generated an additional 16 regions, for a total of 97 study-driven regions.
    Using region-level ANOVAs and qualitative timecourse examinations, the regions were probed for the presence of the effects outlined in the above criteria. To ensure a comprehensive analysis, additional regions were garnered from prior studies that have used a variety of tasks to target semantic processing. The literature-derived regions were subjected to the same ANOVAs and qualitative timecourse analysis as the study-driven regions to examine whether they exhibited the effects outlined in the above criteria. The above analysis resulted in three principal observations. First, we identified regions in the left parahippocampal gyrus (PHG) and left medial superior frontal cortex (mSFC) that, by satisfying essentially all the above criteria, demonstrated a role in semantic memory retrieval for recently acquired and previously known words. Second, despite strong expectations, regions in the left MTG and left vIFG failed to show activity in support of a role in semantic retrieval for the novel words. On the contrary, the profiles seen in the two said regions, namely a ‘word > novel PW’ effect and a word repetition suppression effect, were consistent with a role in semantic retrieval exclusively for the previously known words. The latter observation suggests that the novel words have yet to undergo adequate consolidation to engage, in addition to PHG and mSFC, canonical semantic regions such as left MTG. Third, despite the potentially crucial distinctions noted in Chapter 3, left lateral/medial parietal regions implicated in episodic memory retrieval exhibited many of the properties outlined for PHG and mSFC above during retrieval of newly learned words. Crucially, instead of exhibiting repetition suppression for real words, as observed in PHG/mSFC, the parietal regions showed the opposite effect, resembling the episodic ‘old > new’ retrieval success effect.
    The latter observation argues against a semantic role and in support of an episodic role consistent with previous literature. Taken together, these observations suggest that, in addition to the role played by PHG/mSFC in supporting semantic memory retrieval for the novel words, the parietal regions also make significant contributions to memory retrieval of the novel words via complementary episodic processes. Finally, using item-level timecourses derived from the 97 study-driven ROIs, clustering algorithms were used to group regions with similar characteristics, with the goal of identifying a cluster corresponding to a putative semantic brain system. A number of clusters were identified containing regions with anatomical and functional correspondence to previously well-characterized systems. For instance, a cluster containing regions in left lateral parietal cortex, precuneus, and superior frontal cortex corresponding to a previously described episodic memory retrieval system (Nelson et al., 2010) was identified. Two additional clusters, corresponding to frontoparietal and cingulo-opercular task control systems (Dosenbach et al., 2006, 2007), were also among the identified clusters. However, the clustering analysis did not identify a cluster of regions with semantic properties, such as PHG and mSFC noted above, that could potentially correspond to a semantic brain system. The above outlined findings from the current study, juxtaposed with prior findings from the literature, were interpreted in the following manner. The two regions identified in the current study, i.e. left parahippocampal gyrus and medial superior frontal gyrus, constitute regions that are used for learning new words and are also recruited during semantic retrieval of previously well-established meanings. In addition, the current results also suggest complementary episodic contributions to the word learning process from regions in left parietal/superior frontal cortex.
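Grouping regions by the similarity of their timecourses, as described above, can be illustrated with a plain k-means sketch. This is a generic illustration under invented toy data, not the study's actual clustering procedure, which is not specified here.

```python
import random

random.seed(2)

def kmeans(points, k, iters=20):
    """Plain k-means on timecourses represented as equal-length lists."""
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each timecourse to its nearest center (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        # Recompute each center as the mean of its assigned timecourses.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = [sum(vals) / len(cl) for vals in zip(*cl)]
    return clusters

# Toy 'timecourses': two well-separated groups around 0 and around 1.
low = [[random.gauss(0.0, 0.05) for _ in range(10)] for _ in range(5)]
high = [[random.gauss(1.0, 0.05) for _ in range(10)] for _ in range(5)]
clusters = kmeans(low + high, k=2)
print([len(c) for c in clusters])
```

With real item-level timecourses, the interesting output is which regions co-cluster, which is how correspondences to known systems (episodic retrieval, task control) were read off above.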
    The latter observation may imply strong episodic contributions to the observed behavioral semantic priming effects. A potential counter-argument, i.e. in support of a semantic basis for the priming effects, is the shared recruitment, in a manner consistent with semantics, of PHG/mSFC by both novel and real word stimuli. The left middle temporal gyrus, a region that the task-evoked and neuropsychological literature consistently associates with word-level semantic processing, was not recruited during memory retrieval of novel words, despite robust engagement by previously known word stimuli. Given their association with category-selective semantic deficits, as well as their role in conceptual/perceptual processing in healthy brains, the memory consolidation literature proposes regions in the lateral temporal lobes as potential neocortical loci for consolidated long-term memory. In the current setting, it is likely that the novel words have yet to be adequately consolidated to engage left MTG as the previously known words did. Finally, the left vIFG exhibited characteristics similar to those of the left middle temporal gyrus, in that it was not recruited by the newly meaning-trained stimuli, despite showing engagement by previously known words. Given that the region failed to appear in our primary contrasts, even those targeting real word stimuli, and its absence in other prior studies that have used lexical decision tasks similar to the current study, we have a slightly different interpretation for this region. The left vIFG is typically recruited in task settings that require controlled/strategic meaning retrieval, a process that may not be critical for adequate performance of the lexical decision task as employed in the current study. Taken together, these findings suggest that a relatively small amount of word learning training is sufficient to create novel words that, in young adults, behaviorally resemble the semantic characteristics of well-known words.
On the other hand, the fMRI findings, particularly the failure of the newly meaning-trained items to engage regions that are canonically responsive to single word meanings (e.g. middle temporal gyrus), may suggest a more protracted timecourse for the functional signature of novel words to resemble that of well-known words. That said, the fMRI findings identified brain regions (left PHG/mSFC) that, consistent with the memory consolidation literature, serve as the functional neuroanatomical “bridge” that connects the novel words to the eventual functional representational destination

    Measurement of the CKM angle gamma in B⁰->DK*⁰ decays using the Dalitz method at the LHCb experiment at CERN, and optimisation of the photon reconstruction for the LHCb detector upgrade

    Quark mixing is described in the standard model of particle physics by the Cabibbo-Kobayashi-Maskawa (CKM) mechanism. The angle gamma of the unitarity triangle is one of the parameters of this mechanism that is still determined with a large uncertainty. It can be measured without significant contribution of new physics, making it a key standard model measurement. The current precision of the best direct measurement of gamma is approximately 10°, whereas the global fits of the CKM parameters determine this angle to within a few degrees. Therefore, a precise measurement of this quantity is needed to further constrain the unitarity triangle of the CKM matrix and check the consistency of the theory. This thesis reports a measurement of gamma with a Dalitz analysis of the B0->DK*0 channel where the D meson decays into K0Spipi, based on the 3 fb⁻¹ of proton-proton collision data collected by LHCb during LHC Run I at centre-of-mass energies of 7 and 8 TeV. This channel is sensitive to gamma through the interference between the b->u and b->c transitions. The CP violation observables are measured to be x- = -0.09 ^{+0.13}_{-0.13} ± 0.09 ± 0.01 , x+ = -0.10 ^{+0.27}_{-0.26} ± 0.06 ± 0.01 , y- = 0.23 ^{+0.15}_{-0.16} ± 0.04 ± 0.01 , y+ = -0.74 ^{+0.23}_{-0.26} ± 0.07 ± 0.01 , where the first uncertainty is statistical, the second is the experimental systematic uncertainty and the third is the systematic uncertainty due to the Dalitz model. A frequentist interpretation of these observables leads to rB0 = 0.39 ± 0.13 , deltaB0 = ( 186^{+24}_{-23} )°, gamma = ( 77^{+23}_{-24} )° , where rB0 is the magnitude of the ratio between the suppressed and favoured decay amplitudes and deltaB0 is the strong phase difference between these two decays. In addition, the work performed on the optimisation of the photon reconstruction for the upgraded LHCb detector is reported.
    During LHC Run III, the LHCb instantaneous luminosity will be increased by a factor of five, implying a larger shower overlap in the electromagnetic calorimeter. The study shows that reducing the cluster size used in the photon reconstruction limits the effect of the overlap between the showers, without inducing a significant energy leakage. With some dedicated corrections, the new cluster reconstruction improves the Bs->Phi gamma mass resolution by 7 to 12%, depending on the calorimeter region
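In the Dalitz formalism used in this analysis, the CP observables relate to the physics parameters as x± = rB0 cos(deltaB0 ± gamma) and y± = rB0 sin(deltaB0 ± gamma), so the two angles can be recovered from the phases of the (x, y) points. The sketch below is a naive inversion at the central values only; the thesis's frequentist treatment propagates the full uncertainties, which is why its results differ slightly.

```python
import math

# Measured central values of the CP violation observables.
x_minus, y_minus = -0.09, 0.23
x_plus, y_plus = -0.10, -0.74

# x± = rB0*cos(deltaB0 ± gamma), y± = rB0*sin(deltaB0 ± gamma):
# the phase of (x-, y-) is deltaB0 - gamma, that of (x+, y+) is deltaB0 + gamma.
phi_minus = math.degrees(math.atan2(y_minus, x_minus))        # deltaB0 - gamma
phi_plus = math.degrees(math.atan2(y_plus, x_plus)) % 360.0   # deltaB0 + gamma

delta_b0 = (phi_plus + phi_minus) / 2
gamma = (phi_plus - phi_minus) / 2
print(round(delta_b0, 1), round(gamma, 1))  # close to the fitted (186, 77) degrees
```

The naive inversion lands within a few degrees of the published deltaB0 = 186° and gamma = 77°, illustrating that the angle information lives almost entirely in the phases of the two observable pairs.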