11 research outputs found

    Deep Virtual Pion Pair Production

    This experiment investigates the deep virtual production of both σ and ρ mesons, with a particular focus on the microscopic structure of the σ meson. While the ρ meson is an ordinary qq̄ pair, the σ meson is not composed solely of a typical qq̄ pair, which has made it a topic of controversy for nearly six decades. Although the existence of the σ meson is now well established, its microscopic structure remains poorly understood. The primary objective of this thesis is to contribute to the understanding of the σ meson by analyzing its deep virtual production. The main focus of this study is the ep → e′p′π⁺π⁻ reaction, which is a crucial process for investigating both the σ and ρ mesons. Specifically, this reaction is sensitive to the pure glue component of the σ meson's wave function near threshold in the ππ system. To separate the σ and ρ meson channels, we analyzed the angular distribution in the ππ rest frame. By focusing on this reaction and employing this technique, we aimed to gain a better understanding of the structure of both the σ and ρ mesons. A model was developed following Lehmann-Dronke to describe the σ and ρ channels separately. For the experiment we used data from the Hall B CLAS12 "Run Group A", with a 10.6 GeV electron beam incident on a liquid-hydrogen (LH2) target. The CLAS12 detector in Hall B has a large acceptance, making it an ideal choice for this study. Using these data, we obtained accurate and reliable measurements of the ep → e′p′π⁺π⁻ reaction and furthered our understanding of the σ and ρ mesons.
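
    The σ (S-wave) and ρ (P-wave) channels populate different Legendre moments of the pion angular distribution in the ππ rest frame, which is what makes the separation possible. The following is a minimal sketch of such a moment analysis in Python; the event arrays and weights are hypothetical stand-ins for the real CLAS12 analysis chain, not the thesis code.

    import numpy as np
    from numpy.polynomial import legendre

    def angular_moments(cos_theta, weights=None, l_max=2):
        """Estimate Legendre moments <P_L(cos theta)> of the pi+ polar angle
        in the pi+ pi- rest frame. A pure S-wave (sigma-like) signal is flat,
        so only <P_0> is nonzero; S-P interference with a rho-like P-wave
        appears in <P_1>, and the pure P-wave contributes to <P_2>."""
        cos_theta = np.asarray(cos_theta)
        w = np.ones_like(cos_theta) if weights is None else np.asarray(weights)
        moments = []
        for L in range(l_max + 1):
            coeffs = np.zeros(L + 1)
            coeffs[L] = 1.0       # select the L-th Legendre polynomial
            moments.append(np.average(legendre.legval(cos_theta, coeffs), weights=w))
        return moments

    # Toy usage: a flat (S-wave-like) sample has vanishing higher moments.
    rng = np.random.default_rng(0)
    print(angular_moments(rng.uniform(-1, 1, 100_000)))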

    Study of inelastic processes in proton-proton collisions at the LHC with the TOTEM experiment

    The TOTEM experiment, located in the CMS cavern at the CERN Large Hadron Collider (LHC), is one of the six experiments investigating high energy physics at this new machine. In particular, TOTEM has been designed for TOTal cross-section, Elastic scattering and diffraction dissociation Measurements. The total proton-proton cross-section will be measured with the luminosity-independent method based on the Optical Theorem. This method will allow a precision of 1–2% at a center-of-mass energy of 14 TeV. To reach such a small error it is necessary to study the p-p elastic scattering cross-section (dσ/dt) down to |t| ∼ 10^−3 GeV^2 (to best evaluate the extrapolation to t = 0) and, at the same time, to measure the total inelastic interaction rate. To this end, elastically scattered protons must be detected at very small angles with respect to the beam, while having the largest possible η coverage for particle detection in order to reduce losses of inelastic events. In addition, TOTEM will also perform studies of elastic scattering with large momentum transfer and a comprehensive physics programme on diffractive processes (partly in cooperation with CMS), in order to reach a deeper understanding of the proton structure. For these purposes TOTEM consists of three sub-detectors: two gas-based telescopes (T1 and T2) for the detection of inelastic processes, with a coverage in the range 3.1 ≤ |η| ≤ 6.5 on both sides of interaction point 5 (IP5), and silicon-based detectors for the elastically scattered protons, located in special movable beampipe insertions called Roman Pots (RPs) at about 147 m and 220 m from the interaction point. The work done by the candidate and reported in this thesis mainly covers three subjects: the tuning of the simulation for the T2 inelastic telescope, the study of the noise of the T2 detector, and a preliminary study of the detection performance for inelastic events. In the following, the first chapter describes the TOTEM experiment and the LHC machine, with particular attention to the T2 telescope and its analysis software, which are of critical importance for the work of this thesis. The second chapter introduces the physics programme of the TOTEM experiment. Chapter three describes the tuning of Geant4 parameters and the improvement of the simulated geometry for the T2 detector, while chapter four summarizes an important and demanding study of the detector noise. Finally, chapter five presents some preliminary studies of inelastic processes, showing the prospects for the TOTEM experiment to measure the inelastic cross section in a wide kinematic range.
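
    For reference, the luminosity-independent method combines the optical theorem with the measured elastic and inelastic rates, which is why both the t → 0 extrapolation of dσ/dt and the total inelastic rate enter; ρ denotes the ratio of the real to the imaginary part of the forward elastic amplitude:

    \sigma_{tot} = \frac{16\pi}{1+\rho^{2}} \cdot \frac{(dN_{el}/dt)\big|_{t=0}}{N_{el}+N_{inel}},
    \qquad
    \mathcal{L} = \frac{1+\rho^{2}}{16\pi} \cdot \frac{(N_{el}+N_{inel})^{2}}{(dN_{el}/dt)\big|_{t=0}}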

    Different fuzzy control configurations tuned by the Bees Algorithm for LFC of two-area power system

    This study develops and implements a design of the Fuzzy Proportional Integral Derivative controller with filtered derivative mode (Fuzzy PIDF) for Load Frequency Control (LFC) of a two-area interconnected power system. The Bees Algorithm (BA) and other optimisation tools are used to attain the optimal values of the proposed structure's parameters, which guarantee the best possible performance. A Step Load Perturbation (SLP) of 0.2 pu is applied in area one to examine the dynamic performance of the system with the proposed controller employed as the LFC system. The superiority of the Fuzzy PIDF is demonstrated by comparing the results with those of previous studies for the same power system. As the designed controller is required to provide reliable performance, this study is further extended to propose three different fuzzy control configurations that offer higher reliability, namely Fuzzy Cascade PI − PD, Fuzzy PI plus Fuzzy PD, and Fuzzy (PI + PD), optimized by the BA for the LFC of the same dual-area power system. Moreover, an extensive examination of the robustness of these structures against the parametric uncertainties of the investigated power system is carried out, considering thirteen cases. The simulation results indicate that the proposed BA-tuned fuzzy control structures alleviate the overshoot and undershoot, the settling time of the frequency in both areas, and the tie-line power oscillations. Based on the obtained results, the lowest frequency drop in area one is −0.0414 Hz, achieved by the proposed Fuzzy PIDF tuned by the BA. The results also show that the proposed techniques offer a good transient response, a considerable capability for disturbance rejection, and insensitivity to parametric uncertainty of the controlled system.
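
    As an illustration of how such tuning works, the sketch below implements a basic Bees Algorithm over controller parameters; the cost function (e.g. an ITAE of the frequency deviations from a simulated two-area model) and the parameter bounds are placeholders, not the study's actual setup.

    import numpy as np

    def bees_algorithm(cost, bounds, n_scout=30, n_elite=5, n_recruits=10,
                       patch=0.1, iters=100, seed=1):
        """Basic Bees Algorithm: scouts sample the search space uniformly,
        the best 'elite' sites are exploited by recruited bees searching a
        shrinking neighbourhood, and the remaining scouts keep exploring."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        scouts = rng.uniform(lo, hi, size=(n_scout, len(lo)))
        for _ in range(iters):
            scouts = scouts[np.argsort([cost(s) for s in scouts])]
            for i in range(n_elite):            # local search around elite sites
                cand = scouts[i] + patch * (hi - lo) * rng.uniform(
                    -1, 1, size=(n_recruits, len(lo)))
                best = min(np.clip(cand, lo, hi), key=cost)
                if cost(best) < cost(scouts[i]):
                    scouts[i] = best
            scouts[n_elite:] = rng.uniform(lo, hi, size=(n_scout - n_elite, len(lo)))
            patch *= 0.95                       # shrink neighbourhood over time
        return scouts[0]

    # Hypothetical use: tune Kp, Ki, Kd and the filter coefficient N of a PIDF
    # controller by minimising a simulated ITAE cost over the two-area model:
    # gains = bees_algorithm(itae_cost, bounds=[(0, 5), (0, 5), (0, 5), (1, 500)])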

    Photon polarization in Bs0 → φγ decays at the LHCb experiment

    The Standard Model (SM) is the most precise description of the interactions between fundamental particles available to date. Experimental observations agree with its predictions up to the available precision. However, the SM falls short in describing known phenomena such as the mass hierarchy of the elementary particle families or baryogenesis. To address these shortcomings there is a plethora of hypotheses formalized in theoretical models, such as supersymmetry (SUSY) and others. These models include new components whose observation would mark the path forward for experimental particle physics. It is through the measurement of observables related to loop-level processes that models beyond the SM can be probed. This research takes place in that context, specifically the study of the radiative decay Bs0 → φγ. The aim of this analysis is to measure the polarization of the photon emitted in the Bs0 → φγ channel through a precise measurement of the lifetime of the parent Bs0 meson. This polarization is predominantly left-handed in the SM. The advantage of working with Bs0 mesons rather than B0 mesons becomes evident when comparing the width difference of their mass eigenstates. The analysis is based on a method that does not discriminate between Bs0 mesons and their antiparticles. In this mode the decay rate can be written as a function of a parameter AΔ that encodes the proportion of left- and right-handed photons emitted. The B0 → K*γ channel is used as a control, given the similarity of the kinematics of the two modes, in order to reduce systematic uncertainties and those associated with the decay-time acceptance function, which is of vital importance for the analysis since the parameter AΔ is especially sensitive to its parametrization. The ratio of the decay-time distributions of the two channels is fitted to the expected functional form, leaving the parameter of interest AΔ free. During 2011 and 2012 the LHCb experiment collected 3 fb^-1 of data at center-of-mass energies of 7 and 8 TeV, respectively. These data are used in the analysis after a selection process, for which the author participated in the design and maintenance of the LHCb selection algorithms known as stripping. After the selection, fits to the mass distributions of both the signal and control channels are performed to identify and subtract the remaining background sources in each, yielding a pure decay-time distribution for each channel. After selection and the mass fits, 4200 Bs0 → φγ and 25700 B0 → K*γ events are obtained for the decay-time fit. Background subtraction is performed using the sPlot technique. Once the samples are purified, they are binned using an adaptive algorithm to minimize the dispersion in the exponential tails of the distributions before taking the ratio of the signal and control channels. This ratio is fitted by minimizing a chi-squared figure of merit, yielding AΔ = -0.85 +0.43/-0.46 (stat) +0.21/-0.22 (syst), compatible at the level of two standard deviations with the SM prediction of AΔ = 0.047 +0.029/-0.025.
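
    For context, the untagged decay-time distribution underlying this measurement has the standard form in which AΔ multiplies the hyperbolic-sine term generated by the width difference ΔΓs of the Bs0 mass eigenstates:

    \Gamma_{B_s^0 \to \phi\gamma}(t) \;\propto\; e^{-\Gamma_s t}
    \left[\cosh\!\left(\tfrac{\Delta\Gamma_s t}{2}\right)
    - A^{\Delta}\,\sinh\!\left(\tfrac{\Delta\Gamma_s t}{2}\right)\right]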

    Analysis into the decays of K+ -> pi+ mu+ mu- and Bc+ -> phi K+ at LHCb

    This thesis outlines the contributions made by the author to the LHCbPR framework, part of the software validation and testing framework for the LHCb experiment at the European Organization for Nuclear Research (CERN), and analyses of the rare decays K± → π±μ+μ− and Bc± → φK± with the LHCb detector. The testing of LHCb software during development is vital to ensuring an efficient and optimal dataflow. LHCbPR allows quick and easy monitoring of the effects of software changes on the system through the orchestrated execution of a set of pre-written tests, the results of which are then displayed online. Three such tests, which monitor physics processes during the development of the simulation frameworks, have been migrated by the author from offline user-run scripts to fully automated tests within LHCbPR. The decay K± → π±μ+μ−, although observed previously by other experiments, is investigated in this thesis to determine the prospects of a first observation within a collider experiment and of performing a more precise measurement in the future. The analysis makes use of the 3.6 fb^-1 of data collected from 13 TeV collisions at LHCb between 2015 and 2017, where additional improvements in triggering were implemented to record events with lower pT, such as those of rare kaon decays. A branching ratio of B(K± → π±μ+μ−) = (6.3 ± 2.6) × 10^-8 was obtained, compatible within 1σ with the world average of (9.4 ± 0.6) × 10^-8. The results, although not yet competitive, hint that with the predicted levels of improvement at LHCb in Run 3, the experiment could indeed contribute to the future of kaon decay measurement.

    Predictive performance of front-loaded experimentation strategies in pharmaceutical discovery: a Bayesian perspective

    Experimentation is a significant innovation process activity and its design is fundamental to the learning and knowledge build-up process. Front-loaded experimentation is a strategy that seeks to improve innovation process performance; by exploiting early information to spot and solve problems as far upstream as possible, costly overruns in subsequent product development are avoided. Although the value of search through front-loaded experimentation in complex and novel environments is recognized, the phenomenon has not been studied in the highly relevant pharmaceutical R&D context, where many drug candidates are typically killed very late in the innovation process because potential problems were insufficiently anticipated upfront. In pharmaceutical research the initial problem is to discover a “drug-like” complex biological or chemical system that has the potential to affect a biological target on a disease pathway. My case study evidence found that the discovery process is managed through a front-loaded experimentation strategy: the research team gradually builds a mental model of the drug's action in which the solution of critical design problems can be initiated at various moments in the innovation process. The purpose of this research was to evaluate the predictive performance of front-loaded experimentation strategies in the discovery process. Because predictive performance necessitates conditional probability thinking, a Bayesian methodology is proposed and a rationale is given to develop research propositions using Monte Carlo simulation. An adaptive system paradigm, then, is the basis for designing the simulation model used for top-down theory development. My simulation results indicate that front-loaded strategies in a pharmaceutical discovery context outperform other strategies on positive predictive performance. Front-loaded strategies therefore increase the odds of compounds succeeding in subsequent development testing, provided they were found positive in discovery. Also, increasing the number of parallel concept explorations in discovery significantly influences the negative predictive performance of experimentation strategies, reducing the probability of missed opportunities in development. These results are shown to be robust for varying degrees of predictability of the discovery process. The counterintuitive business implication of my research findings is that the key to further reducing spend and overruns in pharmaceutical development is to be found in discovery, where efforts to better understand drug candidates lead to higher success rates later in the innovation process.
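
    Because the study's performance metrics are the conditional probabilities PPV and NPV, a compact illustration of the underlying Bayes computation may help; the sensitivity, specificity, and prevalence figures below are hypothetical placeholders, not results from the thesis.

    def predictive_values(sensitivity, specificity, prevalence):
        """Bayes' theorem for a screening experiment: PPV is the probability
        that a compound flagged positive in discovery truly succeeds later in
        development; NPV is the probability that a negative is a true dead end."""
        tp = sensitivity * prevalence               # true positives
        fp = (1 - specificity) * (1 - prevalence)   # false positives
        tn = specificity * (1 - prevalence)         # true negatives
        fn = (1 - sensitivity) * prevalence         # false negatives (missed opportunities)
        return tp / (tp + fp), tn / (tn + fn)

    # Hypothetical numbers: a discovery screen catching 80% of true leads with
    # 90% specificity, where only 5% of candidates would survive development.
    ppv, npv = predictive_values(0.80, 0.90, 0.05)
    print(f"PPV = {ppv:.2f}, NPV = {npv:.3f}")   # PPV ≈ 0.30, NPV ≈ 0.99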

    Spinful Algorithmization of High Energy Diffraction

    High energy diffraction probes fundamental interactions, the vacuum, and quantum mechanically coherent matter waves at asymptotic energies. In this work, we algorithmize our abstract ideas and develop a set of rigid rules for diffraction. To get spin under control, we construct a new Monte Carlo simulation engine, GRANIITTI. It is the first event generator with custom spin-dependent scattering amplitudes for semi-exclusive diffraction in the glueball domain, driven by fully multithreaded importance sampling and written in C++. Our simulations provide new computational evidence that the enigmatic glueball filter observable is a spin polarization filter for tensor resonances. For algorithmic spin studies, we automate the classic Laplace spherical harmonics inverse expansion, carefully define the phase space issues related to geometric acceptance, and systematically study the harmonic mixing properties in different Lorentz frames. To improve the big picture, we generalize the standard soft diffraction observables and definitions by developing a high dimensional probabilistic framework based on incidence algebras, Combinatorial Superstatistics, and also solve a new superposition inverse problem using the Möbius inversion theorem. For inverting stochastic autoconvolution integral equations, or 'inverting the proton', we develop a novel recursive inverse algorithm based on the Fast Fourier Transform and relative entropy minimization. The first algorithmic inverse results on the proton double multiplicity structure and multiparton interaction rates are obtained using published LHC data, in agreement with standard phenomenology. For optimal inversion of the detector efficiency response, we build the first Deep Learning based solution working in higher phase space dimensions, DeepEfficiency, which inverts the detector response on an event-by-event basis and minimizes the event generator dependence. Using ALICE proton-proton data at the LHC at 13 TeV, we obtain the first unfolded fiducial measurement of the multidimensional combinatorial partial cross sections, the first multidimensional maximum likelihood fit of the effective soft pomeron intercept, and the first multidimensional maximum likelihood fit of the single, double and non-diffractive component cross sections. Great care is taken with the fiducial and non-fiducial definitions. The second set of measurements centers on semi-exclusive central diffractive production of hadron pairs, which we study with the ALICE data. We measure and fit the resonance spectra of identified pion and kaon pairs, which is crucial on the road towards solving the mysteries of glueballs, the proton structure fluctuations, and the pomeron.
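
    To illustrate the autoconvolution-inversion idea ('inverting the proton'), the toy sketch below recovers a discrete distribution from its self-convolution with an FFT and a principal square root; the algorithm developed in the thesis is recursive and uses relative entropy minimization, which this sketch does not attempt to reproduce.

    import numpy as np

    def invert_autoconvolution(P):
        """Toy inversion of P = p * p (discrete autoconvolution): in Fourier
        space autoconvolution is a pointwise square, so p follows from the
        principal square root, projected back onto a non-negative,
        unit-sum vector."""
        n = (len(P) + 1) // 2                  # P has length 2n - 1
        Phat = np.fft.fft(P)
        phat = np.sqrt(np.abs(Phat)) * np.exp(0.5j * np.angle(Phat))
        p = np.maximum(np.real(np.fft.ifft(phat))[:n], 0.0)
        return p / p.sum()                     # renormalize to a distribution

    # Toy check: autoconvolve a known distribution and invert it back.
    p_true = np.array([0.5, 0.3, 0.15, 0.05])
    P = np.convolve(p_true, p_true)            # length 2*4 - 1 = 7
    print(invert_autoconvolution(P))           # ~ [0.5, 0.3, 0.15, 0.05]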

    Jet-Like Correlations in 200 GeV Au+Au Collisions

    Ultrarelativistic heavy-ion collisions at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) provide a novel environment in which to study the fundamental interaction between quarks and gluons, known as the strong nuclear force. The strong force ordinarily confines quarks and gluons (“partons”) to the interior of composite particles such as protons and neutrons (“hadrons”), but in heavy-ion collisions energy densities become sufficiently high that hadrons effectively melt into a plasma of free partons known as a Quark Gluon Plasma (QGP). Quantifying the properties of the QGP, such as its bulk viscosity, temperature, and shear-viscosity-to-entropy-density ratio, has become a key endeavor of high energy nuclear physics in the past two decades. Additionally, quantification of the strength of the color field (the strong-interaction analogue of the electromagnetic field) and of how energy permeates through the plasma itself is necessary. This dissertation focuses on the latter set of objectives through the use of an experimental observable known as “jets”: collimated sprays of particles. Jets in heavy-ion collisions have been found to have both their momentum and their shape modified relative to jets in proton-proton collisions, where there is no QGP formation. This jet modification occurs because the parent partons of the jets have themselves been modified by the interaction with the color field inside the QGP. Thus, studying jet modification allows us to quantify properties of the QGP itself. This dissertation presents the results of examining angular correlations between jet fragments and high momentum neutral pions (π0). Correlating jet fragments with a high momentum π0 allows for high statistical precision, as neutral pions are among the most abundant particles created in heavy-ion collisions. A high momentum (pT ≥ 12 GeV/c) π0 can also carry up to 80% of a single jet's momentum, making it a very good kinematic proxy for the jet itself. This work utilizes the largest heavy-ion data set available from the PHENIX detector, collected during 2014, containing approximately 20 billion Au+Au events.
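
    At its core, a correlation measurement of this kind histograms the azimuthal angle differences between the trigger π0 and associated jet fragments; below is a minimal sketch, with hypothetical toy angles standing in for the PHENIX data.

    import numpy as np

    def delta_phi_correlation(phi_trig, phi_assoc, n_bins=36):
        """Per-trigger azimuthal correlation between trigger particles
        (e.g. high-pT pi0) and associated particles (jet fragments). dphi is
        folded into [-pi/2, 3pi/2) so the near-side jet peak sits at 0 and
        the away-side peak at pi."""
        pairs = phi_trig[:, None] - phi_assoc[None, :]   # all trigger-associate pairs
        dphi = np.mod(pairs + np.pi / 2, 2 * np.pi) - np.pi / 2
        counts, edges = np.histogram(dphi, bins=n_bins,
                                     range=(-np.pi / 2, 3 * np.pi / 2))
        return counts / len(phi_trig), edges             # per-trigger yield per bin

    # Hypothetical usage with toy angles drawn uniformly in [0, 2*pi):
    rng = np.random.default_rng(7)
    yield_per_trig, edges = delta_phi_correlation(rng.uniform(0, 2 * np.pi, 100),
                                                  rng.uniform(0, 2 * np.pi, 5000))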
