42 research outputs found

    The German-Tunisian project at Dougga


    Clinical and virological characteristics of hospitalised COVID-19 patients in a German tertiary care centre during the first wave of the SARS-CoV-2 pandemic: a prospective observational study

    Purpose: Adequate patient allocation is pivotal for optimal resource management in strained healthcare systems and requires detailed knowledge of clinical and virological disease trajectories. The purpose of this work was to identify risk factors associated with the need for invasive mechanical ventilation (IMV), to analyse viral kinetics in patients with and without IMV, and to provide a comprehensive description of the clinical course.
    Methods: A cohort of 168 hospitalised adult COVID-19 patients enrolled in a prospective observational study at a large European tertiary care centre was analysed.
    Results: Forty-four per cent (71/161) of patients required invasive mechanical ventilation (IMV). A shorter duration of symptoms before admission (aOR 1.22 per day less, 95% CI 1.10-1.37, p < 0.01) and a history of hypertension (aOR 5.55, 95% CI 2.00-16.82, p < 0.01) were associated with the need for IMV. Patients on IMV had higher maximal concentrations, slower decline rates, and longer shedding of SARS-CoV-2 than non-IMV patients (33 days, IQR 26-46.75, vs 18 days, IQR 16-46.75, respectively, p < 0.01). Median duration of hospitalisation was 9 days (IQR 6-15.5) for non-IMV and 49.5 days (IQR 36.8-82.5) for IMV patients.
    Conclusions: Our results indicate that a short duration of symptoms before admission is a risk factor for severe disease that merits further investigation, and that viral load kinetics differ in severely affected patients. The median duration of hospitalisation of IMV patients was longer than described for acute respiratory distress syndrome unrelated to COVID-19.
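    The adjusted odds ratios above come from logistic regression, where aOR = exp(beta) for a one-unit change in the predictor. A short sketch, using only the figures reported in this abstract, shows how a per-day aOR compounds over several days and how the reported confidence interval maps back to the coefficient scale (the recovered standard error is approximate because the published values are rounded):

```python
import math

# aOR per day less of symptom duration before admission (from the abstract)
aor_per_day = 1.22
ci_low, ci_high = 1.10, 1.37

# On the log-odds scale the per-day effect is beta = ln(aOR).
beta = math.log(aor_per_day)

# A per-day odds ratio compounds multiplicatively: 3 fewer symptom days
# multiply the odds of needing IMV by aOR**3 = exp(3 * beta).
aor_3_days = aor_per_day ** 3

# The 95% CI bounds correspond to exp(beta +/- 1.96 * SE), so the standard
# error can be recovered (up to rounding) from the reported interval.
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

print(f"beta = {beta:.3f}, SE ~ {se:.3f}, 3-day aOR = {aor_3_days:.2f}")
```

    The same arithmetic applies to the hypertension aOR of 5.55, whose wide interval (2.00-16.82) reflects a much larger standard error on the log scale.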

    Belle II Pixel Detector Commissioning and Operational Experience


    Status of the BELLE II Pixel Detector

    The Belle II experiment at the super KEK B-factory (SuperKEKB) in Tsukuba, Japan, has been collecting e⁺e⁻ collision data since March 2019. Operating at a record-breaking luminosity of up to 4.7 × 10³⁴ cm⁻²s⁻¹, data corresponding to 424 fb⁻¹ have since been recorded. The Belle II VerteX Detector (VXD) is central to the Belle II detector and its physics programme and plays a crucial role in reconstructing precise primary and decay vertices. It consists of the outer four-layer Silicon Vertex Detector (SVD), using double-sided silicon strips, and the inner two-layer PiXel Detector (PXD), based on the Depleted P-channel Field Effect Transistor (DePFET) technology. The PXD DePFET structure combines signal generation and amplification within pixels with a minimum pitch of 50 × 55 μm². A high gain and a high signal-to-noise ratio allow thinning the pixels to 75 μm while retaining a high pixel hit efficiency of about 99%. As a consequence, the material budget of the full detector is also kept low, at ≈0.21% X/X₀ per layer in the acceptance region. This includes contributions from the control, Analog-to-Digital Converter (ADC) and data-processing Application-Specific Integrated Circuits (ASICs), as well as from cooling and support structures. This article presents the experience gained from four years of operating the PXD, the first full-scale detector employing the DePFET technology in high-energy physics. Overall, the PXD has met expectations. Operating in the intense SuperKEKB environment poses many challenges, which are also discussed. The current PXD system remains incomplete, with only 20 of 40 modules installed. A full replacement has been constructed and is currently in its final testing stage before it is installed into Belle II during the ongoing long shutdown, which will last throughout 2023.
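    The instantaneous and integrated luminosity figures quoted above connect through a simple unit conversion (1 b = 10⁻²⁴ cm², so 1 fb⁻¹ = 10³⁹ cm⁻²). A back-of-the-envelope sketch, not official Belle II accounting, since real running never sits at peak luminosity:

```python
# Unit bookkeeping for the luminosity figures quoted above.
PEAK_LUMI = 4.7e34          # cm^-2 s^-1, record instantaneous luminosity
INV_FB_IN_INV_CM2 = 1e39    # 1 fb^-1 = 10^39 cm^-2
RECORDED = 424              # fb^-1, integrated luminosity quoted above

# Time to accumulate 1 fb^-1 if the machine ran continuously at peak:
seconds_per_invfb = INV_FB_IN_INV_CM2 / PEAK_LUMI
hours_per_invfb = seconds_per_invfb / 3600

print(f"1 fb^-1 at peak luminosity: {hours_per_invfb:.1f} h")
print(f"{RECORDED} fb^-1 at peak luminosity: "
      f"{RECORDED * hours_per_invfb / 24:.0f} days")
```

    At the record luminosity, collecting 1 fb⁻¹ takes roughly six hours of ideal running, which puts the 424 fb⁻¹ dataset in perspective.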

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, against which newly developed literature search techniques could be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH (RELISH) consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research. Peer reviewed.
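    One of the baseline families named above, Term Frequency-Inverse Document Frequency, can be sketched in a few lines: score each candidate against the seed article by the cosine of their TF-IDF vectors. This is an illustrative toy only; RELISH's actual baselines (Okapi BM25, PubMed Related Articles) use more refined weighting schemes, and the three-document corpus here is hypothetical.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Return one {term: tf-idf weight} dict per tokenised document."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))   # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse {term: weight} vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = (math.sqrt(sum(w * w for w in u.values()))
            * math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

docs = ["protein folding dynamics".split(),          # seed article (index 0)
        "protein structure and folding".split(),     # candidate 1
        "galaxy cluster dynamics".split()]           # candidate 2
vecs = tfidf_vectors(docs)

# Rank candidates 1 and 2 by similarity to the seed document.
ranked = sorted(range(1, 3), key=lambda i: cosine(vecs[0], vecs[i]),
                reverse=True)
print(ranked)  # -> [1, 2]: candidate 1 shares 'protein'/'folding' with the seed
```

    The abstract's observation that BM25, TF-IDF and PubMed Related Articles each surface distinct result sets is exactly the kind of behaviour such a benchmark can quantify.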

    The coming decade of digital brain research: a vision for neuroscience at the intersection of technology and computing

    In recent years, brain research has indisputably entered a new epoch, driven by substantial methodological advances and digitally enabled data integration and modelling at multiple scales, from molecules to the whole brain. Major advances are emerging at the intersection of neuroscience with technology and computing. This new science of the brain combines high-quality research, data integration across multiple scales, a new culture of multidisciplinary large-scale collaboration, and translation into applications. As pioneered in Europe’s Human Brain Project (HBP), a systematic approach will be essential for meeting the coming decade’s pressing medical and technological challenges. The aims of this paper are to: develop a concept for the coming decade of digital brain research; discuss this new concept with the research community at large, identify points of convergence, and derive shared scientific goals from them; provide a scientific framework for the current and future development of EBRAINS, a research infrastructure resulting from the HBP’s work; inform and engage stakeholders, funding organisations and research institutions regarding future digital brain research; identify and address the transformational potential of comprehensive brain models for artificial intelligence, including machine learning and deep learning; and outline a collaborative approach that integrates reflection, dialogue and societal engagement on the ethical and societal opportunities and challenges of future neuroscience research.

    25th annual computational neuroscience meeting: CNS-2016

    The same neuron may play different functional roles in the neural circuits to which it belongs. For example, neurons in the Tritonia pedal ganglia may participate in variable phases of the swim motor rhythms [1]. While such neuronal functional variability is likely to play a major role in the delivery of the functionality of neural systems, it is difficult to study in most nervous systems. We work on the pyloric rhythm network of the crustacean stomatogastric ganglion (STG) [2]. Typically, network models of the STG treat neurons of the same functional type as a single model neuron (e.g. PD neurons), assuming the same conductance parameters for these neurons and implying their synchronous firing [3, 4]. However, simultaneous recordings of PD neurons show differences between the spike timings of these neurons, which may indicate their functional variability. Here we modelled the two PD neurons of the STG separately in a multi-neuron model of the pyloric network. Our neuron models comply with known correlations between the conductance parameters of ionic currents. Our results reproduce the experimental finding of increasing spike time distance between spikes originating from the two model PD neurons during their synchronised burst phase. The PD neuron with the larger calcium conductance generates its spikes before the other PD neuron, and larger potassium conductance values in the follower neuron imply longer delays between spikes (see Fig. 17). Neuromodulators change the conductance parameters of neurons while maintaining the ratios of these parameters [5]. Our results show that such changes may shift the individual contributions of the two PD neurons to the PD phase of the pyloric rhythm, altering their functionality within this rhythm. Our work paves the way towards an accessible experimental and computational framework for the analysis of the mechanisms and impact of functional variability of neurons within the neural circuits to which they belong.
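    A deliberately simple caricature of the effect described above: two otherwise identical leaky integrate-and-fire neurons whose leak conductances differ reach threshold at different times under the same drive. The paper itself uses full conductance-based models of the two PD neurons; this sketch, with made-up parameter values, only illustrates how a single conductance parameter shifts spike timing.

```python
def first_spike_time(g_leak, i_drive=1.5, c_m=1.0, v_thresh=1.0,
                     dt=0.01, t_max=100.0):
    """Euler-integrate dV/dt = (i_drive - g_leak*V)/c_m from V=0 and
    return the time of the first threshold crossing (None if it never fires)."""
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt * (i_drive - g_leak * v) / c_m
        t += dt
        if v >= v_thresh:
            return t
    return None

t_fast = first_spike_time(g_leak=0.5)   # weaker leak: charges faster
t_slow = first_spike_time(g_leak=1.2)   # stronger leak: later spike
print(t_fast, t_slow)
```

    The same qualitative logic underlies the abstract's observation that the PD neuron with the larger calcium conductance fires first and that larger potassium conductances in the follower lengthen the inter-spike delay.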

    The German-Tunisian project at Dougga : First results of the excavations south of the Maison du Trifolium (Introduction en Français)

    The German-Tunisian excavations in a residential quarter in the south of Dougga (2001-2003) have made it possible to gain an exemplary insight into the city's history, from prehistoric times to late antiquity.
    Khanoussi Mustapha, Ritter Stefan, von Rummel Philipp. The German-Tunisian project at Dougga: First results of the excavations south of the Maison du Trifolium (Introduction en Français). In: Antiquités africaines, 40-41, 2004, pp. 43-66.

    Impact of Multiparametric Stone Measurement in Noncontrast Computer Tomography on Ureterorenoscopic Stone Removal

    Purpose: Low-dose noncontrast computed tomography (NCCT) is the standard imaging modality for patients with acute flank pain and suspected urolithiasis. Stone size is usually measured in 2D by a radiologist. We compared 3D stone measurement using different windows to the 2D measurement and evaluated the clinical impact on ureterorenoscopic stone removal (URS).
    Methods: One hundred sixty-four patients (201 stones) with a preoperative NCCT, followed by URS within 4 weeks, were included in this study. Stone location, number and size of stones, operating time, and laser lithotripsy were documented. Stones were measured in 3D using the bone and soft-tissue windows. The maximum diameter was compared to the radiological report. The U test, the Kruskal-Wallis test, and regression were used for statistical analyses.
    Results: Almost two-thirds (64.68%; 130 stones) of 3D stone measurements in the bone window were smaller than the radiologists' 2D reports, one-third (34.83%; 70 stones) were larger, and 0.5% (1 stone) were the same size. Using the 3D soft-tissue window, 81.09% (163 stones), 17.91% (37 stones), and 1% (2 stones) of stones were measured larger, smaller, or the same, respectively. In the clinical setting, we could calculate a cutoff for laser lithotripsy at a maximum stone diameter of 5.70 mm (p < 0.01) with the 3D and 6.01 mm with the 2D measurements, respectively, and found a significant correlation between maximum stone diameter and operating time (p < 0.01) and between number of stones and operating time (p < 0.01 with and p = 0.02 without laser).
    Conclusion: 3D stone measurement with the bone window seems to be more accurate than 2D measurement, but 2D is sufficient for planning stone treatment. (c) 2021 S. Karger AG, Basel.