355 research outputs found
Comprehensive study of the background for the Pixel Vertex Detector at Belle II
The highly successful Belle experiment was located at the KEKB accelerator in Tsukuba, Japan. KEKB was an electron-positron ring accelerator running at the asymmetric energies of 8 GeV (e-) and 3.5 GeV (e+). The Belle experiment took data from 1999 until June 2010, when it was shut down to begin a major upgrade of both the accelerator and the detector. Belle played a crucial role in the award of the 2008 Nobel Prize in Physics to M. Kobayashi and T. Maskawa. The main physics goal of Belle was the measurement of CP violation in the B-meson system.
This mission, as well as the search for physics beyond the Standard Model, has been passed on to the Belle II experiment at the SuperKEKB accelerator, the direct successors of the Belle experiment and KEKB, respectively. The precise measurement of CP violation and the search for rare or "forbidden" decays of the B meson and the tau lepton as signals of New Physics rely heavily on a large number of recorded events and on the precision with which B-meson and lepton decay vertices can be reconstructed. The accelerator upgrade therefore aims for a 40-fold increase in luminosity, resulting in a peak luminosity of 8x10^35 cm^{-2} s^{-1}. The upgrade is scheduled to be finished by 2017 and will provide asymmetric beam energies of 7 GeV (e-) and 4 GeV (e+), delivered by beams with a vertical size of only 48 nm ("nano-beam optics"), a size never before reached at any particle collider.
The accelerator upgrade will deliver the desired increase in the particle collision rate, but it will also inevitably lead to an increase in the background for all sub-detectors. The Belle detector would not have been able to handle the new background conditions expected at SuperKEKB, hence an upgrade of the Belle detector to the Belle II detector was necessary. Additionally, the upgrade aims to increase the physics performance of the detector, making it more sensitive to the effects of New Physics. The detector upgrade includes improvements and redesigns of almost all subsystems as well as an entirely new sub-detector, the PiXel vertex Detector (PXD). The PXD will ensure that decay vertices are reconstructed with extremely high precision in the harsh background conditions expected at Belle II. It is a semiconductor-based particle tracking detector and will be the innermost sub-detector of Belle II. It offers excellent track and vertex reconstruction capabilities while having a thickness of only 75 μm in order to minimise multiple scattering effects.
Due to the innovative concept of a high-luminosity nano-beam accelerator, the scale of background produced at the future SuperKEKB cannot be extrapolated from a traditional electron-positron collider and therefore has to be simulated using first-principle Monte Carlo techniques. This thesis focuses on a detailed study of the expected background for the pixel vertex detector at the upcoming Belle II experiment. It starts with a comprehensive summary of the key components of the SuperKEKB accelerator and the Belle II detector before delving into the details of the Belle II simulation and reconstruction framework basf2. It was decided to develop the basf2 framework from scratch rather than adapting the software framework used at Belle, since the changes made in the upgrade from the Belle to the Belle II detector would have required major modifications of nearly all existing libraries.
This thesis continues by explaining, in detail, the measurement and analysis of an experiment conducted at Belle in 2010, shortly before the KEKB accelerator and the Belle detector were shut down. The experiment aimed to validate the description of a major background for the PXD, namely the two-photon production of an electron-positron pair, by the Monte Carlo generators KoralW and BDK, which had never been tested in the kinematic region relevant for the PXD. A comparison based on Monte Carlo data shows that the difference between KoralW and BDK in the high cross-section, low-pt region (below 20 MeV) for the produced electron and positron is very small, and that both Monte Carlo generators agree with the experiment in this important low-momentum regime. However, the question arises whether the cross-section delivered by the Monte Carlo generators is correct over a wider phase space, but still below the centre-of-mass energies where these generators have been verified experimentally (e.g. at the e+e- colliders PETRA and LEP). To answer this question, a comparison between recorded detector data and Monte Carlo data is performed, an analysis that had never been done for centre-of-mass energies of the order of those of the Belle and Belle II experiments. From the results the conclusion is drawn that the two Monte Carlo generators, KoralW and BDK, agree well for low values of pt but differ significantly at intermediate values, where the total cross-sections are already very small. The recorded data showed that for intermediate pt ranges the behaviour of BDK is correct, while KoralW overshoots the data. Since, however, the cross-section peaks strongly at low values of pt, both generators can be used for further background studies.
Furthermore, this thesis includes a detailed basf2 simulation study of the major beam and QED backgrounds expected at Belle II and their impact on the PXD. Various figures of merit are estimated, such as particle flux, radiation dose and occupancy. On average, the inner layer experiences a particle flux of 6.1 MHz cm^{-2} and the outer layer 2.5 MHz cm^{-2}. The distribution of the particle flux along the global z-axis is fairly flat, meaning that the radiation damage is evenly distributed along the PXD ladders. The simulation shows that the inner layer of the PXD is exposed to a radiation dose of 19.9 kGy/smy and the outer layer to 4.9 kGy/smy. Irradiation tests of DEPFET sensors with 10 MeV electrons showed that the sensors work reliably up to a dose of at least 100 kGy, and it is believed that they can even cope with up to 200 kGy. Combined with the dose rates obtained from the simulation, these numbers translate into a lifetime of roughly 10 years for the PXD sensors, the typical operation time of a high-energy physics detector. The study shows that the expected PXD occupancy, summed over all background sources, is given by
inner layer: 1.28 ± 0.03 %
outer layer: 0.45 ± 0.01 %
The upper limit for the PXD, imposed by the data acquisition and the track reconstruction, is 3%. The estimated values are well below this limit and, thus, the PXD will withstand the harsh background conditions expected at Belle II.
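The lifetime figure quoted above follows from a simple ratio of the sensors' dose tolerance to the simulated dose rate. The sketch below uses the numbers from the text; the plain division is an assumption about how the estimate was obtained.

```python
# Dose rates from the basf2 simulation quoted above,
# in kGy per Snowmass year (roughly one year of nominal operation).
DOSE_RATE_INNER = 19.9
DOSE_RATE_OUTER = 4.9

# Tolerances from the DEPFET irradiation tests quoted above.
DOSE_LIMIT_TESTED = 100.0    # kGy, demonstrated in tests
DOSE_LIMIT_EXPECTED = 200.0  # kGy, believed upper tolerance

def lifetime_years(dose_limit_kgy, dose_rate_kgy_per_year):
    """Years of operation until the accumulated dose reaches the limit."""
    return dose_limit_kgy / dose_rate_kgy_per_year

# The innermost layer dominates: 200 kGy at 19.9 kGy per year
# gives roughly 10 years, the value quoted in the text.
print(lifetime_years(DOSE_LIMIT_EXPECTED, DOSE_RATE_INNER))
```

Note that the demonstrated 100 kGy tolerance alone would correspond to only about 5 years for the inner layer, which is why the believed 200 kGy tolerance matters for the 10-year figure.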
Linking stage-resolved population models with field observations: an integrated approach on population dynamics of Pseudocalanus elongatus in the German Bight.
The population dynamics of Pseudocalanus elongatus have been investigated within the framework of the GLOBEC-Germany project to gain a better understanding of its life cycle and population dynamics and to estimate secondary production in the North Sea. During an intensive field study in the German Bight between February and October 2004, experiments on reproduction were performed and data on copepodid length and abundance were collected to characterize the population in the southern North Sea. This data set was used to update the literature-based parameterization of a population model for P. elongatus in order to investigate the population dynamics, life history and production in the German Bight. The ability of such data to improve population models is also discussed. Pseudocalanus elongatus was found to be a major contributor to carbon uptake, contributing about one-third of copepod production. Although the spatial variability in the field observations was not reflected by the model, the simulation matched the data within one order of magnitude at most stations. The high-resolution field observations and experiments mainly improved the parameterization of the reproductive parameters. Mortality is found to be a critical parameter due to its influence on population size. Using constant mortality rates, even though they are based on observation-derived estimates, does not seem to capture realistic variability. Our study confirms the need for experimental and field data to build a robust parameterization for concentration-based population models.
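A concentration-based, stage-resolved model of this kind tracks stage abundances with development, mortality and reproduction terms. The sketch below is purely illustrative: the stage grouping, the rate values and the explicit Euler stepping are assumptions, not the GLOBEC-Germany parameterization of P. elongatus.

```python
import numpy as np

# Stage abundances (individuals m^-3): eggs, nauplii, copepodids, adults.
development = np.array([0.20, 0.10, 0.05, 0.00])  # d^-1, transfer to next stage
mortality   = np.array([0.10, 0.08, 0.05, 0.03])  # d^-1, constant rates
egg_rate = 2.0                                     # eggs per adult per day

def step(n, dt=1.0):
    """Advance the stage vector n by dt days with an explicit Euler step."""
    growth = development * n             # flux maturing out of each stage
    dn = -(development + mortality) * n  # losses: development out + mortality
    dn[1:] += growth[:-1]                # gains: recruitment from the stage below
    dn[0] += egg_rate * n[3]             # gains: egg production by adults
    return n + dn * dt

n = np.array([100.0, 50.0, 20.0, 10.0])
for _ in range(60):                      # two months of daily steps
    n = step(n)
```

With constant `mortality`, as the abstract notes, the simulated variability is limited; making that rate time- or observation-dependent is exactly the kind of refinement the field data support.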
Dependence of short-term clinical outcome on the time to endovascular recanalization after ischemic stroke
Acute ischemic stroke is one of the leading causes of death worldwide. For many years,
thrombolysis was considered the gold standard treatment for intracranial thrombotic
vascular occlusion, provided the narrow time window of 4.5 hours could be met. However, many patients were excluded from this therapy due to contraindications (e.g. medicinal anticoagulation) or because the time window was exceeded (e.g. in the case of wake-up strokes).
Intracranial stents were first used in the interventional treatment of acute stroke at the end of the 2000s. In the following years, several studies proved the superiority of mechanical thrombectomy over purely medicinal thrombolysis for proximal vascular occlusions. The big breakthrough came in 2015 with the so-called Big-Five studies: ESCAPE [23], EXTEND-IA [6], MR-CLEAN [19], REVASCAT [25] and SWIFT-PRIME [41]. The results of the MR-CLEAN study were so clear that the remaining four studies were discontinued prematurely for ethical reasons. Taken together, the Big-Five studies showed a 2.42-fold higher probability of surviving the stroke with only minimal neurological deficits compared to purely systemic thrombolysis. The mortality rate decreased, and the risk of intracranial bleeding was not increased compared to systemic lysis therapy. Using the stent-retriever method, the rate of successful recanalizations (TICI 2b or higher) increased to 70-90% (previously 40-50%). [14]
In 2017, the DAWN [35] and DEFUSE-3 [3] studies showed that, with careful patient
selection, the window of opportunity for thrombectomies can be significantly extended.
Patients with a mismatch between the infarct volume on cerebral imaging and the clinical deficit benefited from mechanical thrombectomy even in a late time window of 6-24 hours after the first signs of stroke. Mortality and the incidence of intracerebral hemorrhage were not increased compared to thrombectomy within a 6-hour time window. [30]
In this retrospective study, all cranial thrombectomies performed over a period of one
year in the interventional neuroradiology of the Saarland University Medical Center
(SUMC) were analyzed with regard to various criteria. From September 2016 to September 2017, a total of 127 patients were thrombectomized due to an ischemic stroke at
the SUMC. Special emphasis was placed on patients' short-term outcome as a function of the time elapsed until recanalization. Furthermore, a numerical classification system for the assessment of CT perfusion images (penumbra score), newly defined for this work, was tested for its ability to predict the later outcome.
While the outcome did show a dependence on the time elapsed between the first stroke symptoms and the endovascular intervention, this dependence was weaker than expected in many of the analyses performed. Considering only the time span between the first symptoms and the initial imaging, the time factor had no significant influence on the later outcome.
The results support the bridging and mothership concepts, practiced for several years now, for thrombotic occlusion of the large cerebral arteries. The adverse effects of the additional time elapsed through transport to a neurological center with the capability for mechanical thrombectomy are more than offset by the benefits of the better treatment options.
The penumbra score proved to be an excellent indicator of the size of the infarcted area and the severity of the ischemic insult. On its basis, conclusions about the approximate quality of the later outcome can be drawn at a very early stage of treatment.
BALLView: a molecular viewer and modeling tool
Over the last ten years, many molecular modeling software packages have been developed, but most of them offer only limited capabilities or are rather difficult to use. This motivated us to create our own molecular viewer and modeling tool, BALLView, based on our biochemical algorithms library BALL. Through its flexible and intuitive interface, BALLView provides a wide range of features in the fields of electrostatic potentials, molecular mechanics, and molecular editing. In addition, BALLView is a powerful molecular viewer with state-of-the-art graphics: it provides a variety of models for biomolecular visualization, e.g. ball-and-stick models, molecular surfaces, or ribbon models. Since BALLView features a very intuitive graphical user interface, even inexperienced users have direct access to the full functionality. This makes BALLView particularly useful for teaching. For more advanced users, BALLView is extensible in two ways. First, extension at the level of C++ code is very convenient, since the underlying code was designed as a modular development framework. Second, an interface to the scripting language Python allows interactive rapid prototyping of new methods. BALLView is portable and runs on all major platforms (Windows, MacOS X, Linux, most Unix flavors). It is available free of charge under the GNU General Public License (GPL) from our website (www.ballview.org).
Determination of the rate limiting step during zearalenone hydrolysis by ZenA
Changes in school alienation profiles among secondary school students and the role of teaching style. Results from a longitudinal study in Luxembourg and Switzerland
What students think about school has a major impact on learning and academic achievement. The multi-domain concept of school alienation distinguishes between alienation from learning, from teachers and from classmates. We aim to study a) alienation patterns among secondary school students, b) how school alienation profiles change from year 7 to year 9 and how secondary school students transition between profiles, and c) the role of teaching style in transitions between school alienation profiles. We draw on panel data of secondary school students from Luxembourg and Switzerland. Results of latent profile/latent transition analyses reveal distinct school alienation profiles and country differences, and support the idea that student-oriented, supportive teaching styles might prevent students from transitioning towards more highly alienated profiles. (DIPF/Orig.)
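Latent profile analysis of the sort used in this study is closely related to fitting a Gaussian mixture model over the alienation domains. A minimal sketch with synthetic data follows; the simulated scores, the three-profile choice, and the use of scikit-learn's GaussianMixture are illustrative assumptions, not the study's data or model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Columns: alienation from learning, from teachers, from classmates (z-scores).
# Three synthetic groups standing in for low / moderate / high alienation.
X = np.vstack([
    rng.normal(-1.0, 0.3, size=(100, 3)),
    rng.normal( 0.0, 0.3, size=(100, 3)),
    rng.normal( 1.2, 0.3, size=(100, 3)),
])

# One mixture component per hypothesised profile.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
profiles = gmm.predict(X)   # profile membership for each student
means = gmm.means_          # profile-specific domain means
```

A transition analysis would then fit profiles at each measurement wave and cross-tabulate memberships across waves; that step is omitted here.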