    Radiation-hard ASICs for optical data transmission in the ATLAS pixel detector

    We have developed two radiation-hard ASICs for optical data transmission in the ATLAS pixel detector at the LHC at CERN: a driver chip for a Vertical Cavity Surface Emitting Laser (VCSEL) diode for 80 Mbit/s data transmission from the detector, and a Bi-Phase Mark decoder chip to recover the control data and 40 MHz clock received optically by a PIN diode. We have successfully implemented both ASICs in 0.25 um CMOS technology using enclosed layout transistors and guard rings for increased radiation hardness. We present results from prototype circuits and from irradiation studies with 24 GeV protons up to 57 Mrad (1.9 x 10^15 p/cm^2).
    Comment: 8th Topical Seminar on Innovative Particle and Radiation Detectors, Siena, Italy (2002)
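The Bi-Phase Mark code mentioned above toggles the line level at every bit boundary (which carries the 40 MHz clock) and adds a mid-bit toggle for a logical 1. A minimal software sketch of the encode/decode rule, assuming an already-sampled stream of half-bit-cell levels (the actual ASIC recovers these from the PIN diode signal; this is an illustration, not the chip's logic):

```python
def biphase_mark_encode(bits, level=0):
    """Encode bits as half-bit-cell levels under the Bi-Phase Mark rule."""
    halfbits = []
    for b in bits:
        level ^= 1          # mandatory transition at every bit boundary
        halfbits.append(level)
        if b:
            level ^= 1      # extra mid-bit transition encodes a logical 1
        halfbits.append(level)
    return halfbits

def biphase_mark_decode(halfbits):
    """Recover bits: a mid-bit transition means 1, a held level means 0."""
    bits = []
    for i in range(0, len(halfbits) - 1, 2):
        bits.append(1 if halfbits[i] != halfbits[i + 1] else 0)
    return bits
```

Because the encoded stream transitions at every bit boundary regardless of data, a receiver can lock a clock onto those boundary transitions, which is why the decoder chip can recover both the control data and the 40 MHz clock from one optical signal.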

    Systolic blood pressure, chronic obstructive pulmonary disease and cardiovascular risk

    © 2023 Author(s) (or their employer(s)). Objective: In individuals with complex underlying health problems, the association between systolic blood pressure (SBP) and cardiovascular disease is less well recognised. The association between SBP and risk of cardiovascular events in patients with chronic obstructive pulmonary disease (COPD) was investigated. Methods and analysis: In this cohort study, 39 602 individuals with a diagnosis of COPD aged 55-90 years between 1990 and 2009 were identified from validated electronic health records (EHR) in the UK. The association between SBP and risk of cardiovascular end points (composite of ischaemic heart disease, heart failure, stroke and cardiovascular death) was analysed using a deep learning approach. Results: In the selected cohort (46.5% women, median age 69 years), 10 987 cardiovascular events were observed over a median follow-up period of 3.9 years. The association between SBP and risk of cardiovascular end points was found to be monotonic; the lowest SBP exposure group (<120 mm Hg) presented the nadir of risk. With respect to the reference SBP (between 120 and 129 mm Hg), adjusted risk ratios for the primary outcome were 0.99 (95% CI 0.93 to 1.05) for SBP <120 mm Hg, 1.02 (0.97 to 1.07) for SBP between 130 and 139 mm Hg, 1.07 (1.01 to 1.12) for SBP between 140 and 149 mm Hg, 1.11 (1.05 to 1.17) for SBP between 150 and 159 mm Hg and 1.16 (1.10 to 1.22) for SBP ≥160 mm Hg. Conclusion: Using deep learning for modelling EHR, we identified a monotonic association between SBP and risk of cardiovascular events in patients with COPD.
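The banded analysis above can be illustrated in a much-simplified, unadjusted form (the study itself used a deep learning model with covariate adjustment) by grouping records into the same SBP bands and computing crude event-rate ratios against the 120-129 mm Hg reference:

```python
from collections import defaultdict

def sbp_band(sbp):
    """Map a systolic blood pressure (mm Hg) to the exposure bands used above."""
    if sbp < 120: return "<120"
    if sbp < 130: return "120-129"   # reference band
    if sbp < 140: return "130-139"
    if sbp < 150: return "140-149"
    if sbp < 160: return "150-159"
    return ">=160"

def crude_risk_ratios(records, reference="120-129"):
    """records: iterable of (sbp, had_event) pairs.
    Returns crude (unadjusted) risk ratios per band vs the reference band."""
    n, events = defaultdict(int), defaultdict(int)
    for sbp, ev in records:
        band = sbp_band(sbp)
        n[band] += 1
        events[band] += int(ev)
    ref_risk = events[reference] / n[reference]
    return {band: (events[band] / n[band]) / ref_risk for band in n}
```

Crude ratios like these ignore confounders such as age and comorbidity, which is precisely the gap the adjusted, model-based estimates in the study are meant to close.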

    Using hierarchical octrees in Monte Carlo radiative transfer simulations

    A crucial aspect of 3D Monte Carlo radiative transfer is the choice of the spatial grid used to partition the dusty medium. We critically investigate the use of octree grids in Monte Carlo dust radiative transfer, with two different octree construction algorithms (regular and barycentric subdivision) and three different octree traversal algorithms (top-down, neighbour list, and the bookkeeping method). In general, regular octree grids need higher levels of subdivision than barycentric grids for a fixed maximum-cell-mass threshold criterion. The total number of grid cells, however, depends on the geometry of the model. Surprisingly, regular octree grid simulations turn out to be 10 to 20% more efficient in run time than the barycentric grid simulations, even in those cases where the latter contain fewer grid cells than the former. Furthermore, we find that storing neighbour lists for each cell in an octree, ordered according to decreasing overlap area, is worth the additional memory and implementation overhead: using neighbour lists can cut down the grid traversal time by 20% compared to the traditional top-down method. In conclusion, the combination of regular node subdivision and the neighbour list method results in the most efficient octree structure for Monte Carlo radiative transfer simulations.
    Comment: 6 pages, 1 figure, accepted for publication in Astronomy and Astrophysics
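The fixed maximum-cell-mass criterion with regular subdivision can be sketched as follows (a minimal illustration, not the authors' code; the particle-list representation and the depth cap are assumptions):

```python
class OctreeNode:
    """Regular-subdivision octree: split any cell whose total mass exceeds
    a fixed threshold, the cell-mass criterion described above."""
    def __init__(self, xmin, ymin, zmin, size):
        self.xmin, self.ymin, self.zmin, self.size = xmin, ymin, zmin, size
        self.children = []        # empty -> leaf cell

    def build(self, particles, max_mass, max_depth=10):
        """particles: list of (x, y, z, m) tuples lying inside this cell."""
        if max_depth == 0 or sum(m for *_, m in particles) <= max_mass:
            return                # cell is light enough: keep it as a leaf
        h = self.size / 2.0       # regular subdivision: always cut at the midpoint
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    child = OctreeNode(self.xmin + dx * h, self.ymin + dy * h,
                                       self.zmin + dz * h, h)
                    sub = [p for p in particles
                           if child.xmin <= p[0] < child.xmin + h
                           and child.ymin <= p[1] < child.ymin + h
                           and child.zmin <= p[2] < child.zmin + h]
                    child.build(sub, max_mass, max_depth - 1)
                    self.children.append(child)

    def leaves(self):
        """Count leaf cells, i.e. the grid cells a photon packet traverses."""
        if not self.children:
            return 1
        return sum(c.leaves() for c in self.children)
```

A barycentric variant would cut each cell at the mass centroid instead of the geometric midpoint, trading fewer cells for the less regular geometry that, per the results above, ends up costing run time.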

    Extended chiral algebras in the SU(2)_0 WZNW model

    We investigate the W-algebras generated by the integer-dimension chiral primary operators of the SU(2)_0 WZNW model. These have a form almost identical to that found in the c=-2 model but have, in addition, an extended Kac-Moody structure. Moreover, on Hamiltonian reduction these SU(2)_0 W-algebras reduce exactly to those found in c=-2. We explicitly find the free field representations for the chiral j=2 and j=3 operators, which have, respectively, a fermionic doublet and a bosonic triplet nature. The correlation functions of these operators account for the rational solutions of the Knizhnik-Zamolodchikov equation that we find. We explicitly compute the full algebra of the j=2 operators and find that the associativity of the algebra is only guaranteed if certain null vectors decouple from the theory. We conjecture that these algebras may produce a quasi-rational conformal field theory.
    Comment: 18 pages, LaTeX. Minor corrections. Full j=2 algebra added
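For context, the correlators discussed satisfy the standard Knizhnik-Zamolodchikov equation of the SU(2)_k WZNW model (quoted here in its usual form; at level k = 0 the prefactor k + 2 is simply 2):

```latex
\left[ (k+2)\,\frac{\partial}{\partial z_i}
  \;-\; \sum_{j \neq i} \frac{\sum_a t_i^a\, t_j^a}{z_i - z_j} \right]
\left\langle \phi_{j_1}(z_1) \cdots \phi_{j_n}(z_n) \right\rangle = 0 ,
```

where the $t_i^a$ are the su(2) generators acting on the $i$-th insertion; the rational solutions mentioned above are solutions of this system at $k=0$.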

    Hi-BEHRT: Hierarchical Transformer-Based Model for Accurate Prediction of Clinical Events Using Multimodal Longitudinal Electronic Health Records

    © 2022 IEEE. Electronic health records (EHR) represent a holistic overview of patients' trajectories. Their increasing availability has fueled new hopes of leveraging them to develop accurate risk prediction models for a wide range of diseases. Given the complex interrelationships of medical records and patient outcomes, deep learning models have shown clear merits in achieving this goal. However, a key limitation of current studies remains their capacity to process long sequences; long-sequence modelling and its application in the context of healthcare and EHR remain largely unexplored. Capturing the whole history of medical encounters is expected to lead to more accurate predictions, but the inclusion of records collected over decades and from multiple sources can inevitably exceed the receptive field of most existing deep learning architectures. This can result in missing crucial, long-term dependencies. To address this gap, we present Hi-BEHRT, a hierarchical Transformer-based model that can significantly expand the receptive field of Transformers and extract associations from much longer sequences. Using a multimodal large-scale linked longitudinal EHR, Hi-BEHRT exceeds the state-of-the-art deep learning models by 1% to 5% for area under the receiver operating characteristic curve (AUROC) and 1% to 8% for area under the precision-recall curve (AUPRC) on average, and by 2% to 8% (AUROC) and 2% to 11% (AUPRC) for patients with a long medical history, for 5-year heart failure, diabetes, chronic kidney disease, and stroke risk prediction. Additionally, because pretraining for hierarchical Transformers is not well established, we provide an effective end-to-end contrastive pre-training strategy for Hi-BEHRT using EHR, improving its transferability for predicting clinical events with relatively small training datasets.
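Schematically, a hierarchical Transformer expands its receptive field by compressing local windows before a global model attends over the summaries. A toy sketch of that two-stage pipeline (window and stride sizes are hypothetical, and mean-pooling stands in for the local Transformer encoder):

```python
def sliding_windows(seq, window, stride):
    """Split a long visit sequence into (possibly overlapping) windows —
    the first stage of a hierarchical model of this kind."""
    wins = []
    i = 0
    while True:
        wins.append(seq[i:i + window])
        if i + window >= len(seq):
            break                 # last window reached the end of the record
        i += stride
    return wins

def hierarchical_summary(seq, window=50, stride=25, local_encoder=None):
    """Stand-in for the two-stage pipeline: a local encoder compresses each
    window to one summary; a global model would then attend over roughly
    len(seq)/stride summaries instead of len(seq) raw tokens."""
    local_encoder = local_encoder or (lambda w: sum(w) / len(w))  # mean-pool stand-in
    return [local_encoder(w) for w in sliding_windows(seq, window, stride)]
```

With a real local encoder (a small Transformer per window), the global stage sees a sequence shortened by roughly the stride factor, which is what lets decades of encounters fit inside a fixed attention budget.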

    Validation of risk prediction models applied to longitudinal electronic health record data for the prediction of major cardiovascular events in the presence of data shifts

    © 2022 The Author(s). Published by Oxford University Press on behalf of the European Society of Cardiology. Aims: Deep learning has dominated predictive modelling across different fields, but in medicine it has been met with mixed reception. In clinical practice, simple statistical models and risk scores continue to inform cardiovascular disease risk predictions. This is due in part to the knowledge gap about how deep learning models perform in practice when they are subject to dynamic data shifts, a key criterion that common internal validation procedures do not address. We evaluated the performance of a novel deep learning model, BEHRT, under data shifts and compared it with several ML-based and established risk models. Methods and results: Using linked electronic health records of 1.1 million patients across England aged at least 35 years between 1985 and 2015, we replicated three established statistical models for predicting 5-year risk of incident heart failure, stroke, and coronary heart disease. The results were compared with a widely accepted machine learning model (random forests) and a novel deep learning model (BEHRT). In addition to internal validation, we investigated how data shifts affect model discrimination and calibration. To this end, we tested the models on cohorts from (i) distinct geographical regions and (ii) different time periods. Using internal validation, the deep learning models substantially outperformed the best statistical models by 6%, 8%, and 11% in heart failure, stroke, and coronary heart disease, respectively, in terms of the area under the receiver operating characteristic curve. Conclusion: The performance of all models declined as a result of data shifts; despite this, the deep learning models maintained the best performance in all risk prediction tasks. Updating the model with the latest information can improve discrimination, but if the prior distribution changes, the model may remain miscalibrated.
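The geographical and temporal validation described above amounts to scoring each held-out cohort separately so that any drop in discrimination under shift becomes visible. A minimal sketch (cohort keys and the model interface are hypothetical; AUROC is computed via the rank-sum identity):

```python
def auroc(y_true, y_score):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity:
    the probability that a random positive outranks a random negative."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def validate_under_shift(model, cohorts):
    """cohorts: dict mapping a shift label (e.g. a region or period) to
    (features, labels). Returns per-cohort discrimination, so declines
    relative to the internal-validation cohort stand out."""
    return {name: auroc(y, [model(x) for x in X])
            for name, (X, y) in cohorts.items()}
```

Calibration drift under a changed prior, the other failure mode the study highlights, would need a separate per-cohort check (e.g. observed vs predicted event rates), since AUROC is insensitive to it.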

    Systolic Blood Pressure and Cardiovascular Risk in Patients with Diabetes: A Prospective Cohort Study

    © 2023 Lippincott Williams and Wilkins. All rights reserved. Background: Whether the association between systolic blood pressure (SBP) and risk of cardiovascular disease is monotonic, or whether there is a nadir of optimal blood pressure, remains controversial. We investigated the association between SBP and cardiovascular events in patients with diabetes across the full spectrum of SBP. Methods: A cohort of 49 000 individuals with diabetes aged 50 to 90 years between 1990 and 2005 was identified from linked electronic health records in the United Kingdom. Associations between SBP and cardiovascular outcomes (ischemic heart disease, heart failure, stroke, and cardiovascular death) were analyzed using a deep learning approach. Results: Over a median follow-up of 7.3 years, 16 378 cardiovascular events were observed. The relationship between SBP and cardiovascular events followed a monotonic pattern, with the group with the lowest baseline SBP (<120 mm Hg) exhibiting the lowest risk of cardiovascular events. In comparison to the reference group with the lowest SBP (<120 mm Hg), the adjusted risk ratio for cardiovascular disease was 1.03 (95% CI, 0.97-1.10) for SBP between 120 and 129 mm Hg, 1.05 (0.99-1.11) for SBP between 130 and 139 mm Hg, 1.08 (1.01-1.15) for SBP between 140 and 149 mm Hg, 1.12 (1.03-1.20) for SBP between 150 and 159 mm Hg, and 1.19 (1.09-1.28) for SBP ≥160 mm Hg. Conclusions: Using deep learning modeling, we found a monotonic relationship between SBP and risk of cardiovascular outcomes in patients with diabetes, without evidence of a J-shaped relationship.
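The monotonic-vs-J-shape question above reduces to two simple checks on the band-level risk estimates: does risk ever dip below the lowest band, and where does the nadir sit? A small illustrative sketch, using the abstract's adjusted risk ratios with the <120 mm Hg reference set to 1.00:

```python
def is_monotonic(risks):
    """True if band-level risk never decreases as SBP rises — the monotonic
    pattern reported above, as opposed to a J-shaped curve with a dip."""
    return all(a <= b for a, b in zip(risks, risks[1:]))

def nadir_band(bands, risks):
    """Return the SBP band with the lowest estimated risk (the 'nadir')."""
    return bands[risks.index(min(risks))]
```

A J-shaped relationship would put the nadir in an intermediate band and make `is_monotonic` return False; the study's estimates place the nadir in the lowest band.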

    Evaluating cognitive load of multimedia learning by eye-tracking data analysis

    Background and Objectives: Today, it is common to use multimedia in foreign language teaching. There are principles for designing multimedia that reduce task cognitive load; these principles are based on cognitive load theory. Methods of cognitive load measurement fall into two categories, subjective and objective. NASA-TLX is an example of a subjective measurement; methods such as electroencephalography and eye-tracking are among the objective measurements. Because of the advantages of objective measurements, these methods are common in cognitive studies. Eye-tracking technology can record various human eye movements, such as pupil dilation, saccades, fixations, blinks and microsaccades, at a high sampling rate, and these measurements are widely used in cognitive and mental workload studies. In this paper, the cognitive load in multimedia language learning was evaluated using eye-tracking data analysis. Methods: Two multimedia versions for teaching English were produced with the same narration and a length of 342 s. In one version, the multimedia design principles were applied, whereas in the other version they were violated so that it would impose more cognitive load than the former version. Ten subjects, whose English listening comprehension was assessed with a simulation of the International English Language Testing System (IELTS), participated in the experiment and were randomly divided into two equal groups of five; the two groups were homogeneous with respect to their listening proficiency. One group watched the multimedia without principles while the other group watched the multimedia with principles. Then, each individual answered 12 multiple-choice questions about the concepts presented in the multimedia as a performance test. While watching the multimedia and taking the performance test, the participants' eye movement data were recorded.
Then, each person filled out the NASA-TLX questionnaire. Based on the results of the performance test and the NASA-TLX, the difficulty level of the multimedia without principles was evaluated against the version with principles. The collected data were divided into blocks of 30 seconds. Findings: Based on the NASA-TLX, the group who watched the multimedia without principles experienced more cognitive load than the group who watched the multimedia with principles, which confirmed our assumption about the higher load of the multimedia without principles. However, no significant difference was found in the performance-test results between the two groups. According to statistical analyses, pupil diameter, saccade length, saccade velocity, blink latency, and microsaccade amplitude in the multimedia blocks of the two groups differed significantly. Nevertheless, no significant difference was found between the two groups in terms of fixation time, fixation rate, and microsaccade rate. Conclusion: Based on the findings of this study, pupil dilation, saccade length, saccade velocity, blink latency, and microsaccade amplitude have a significant relationship with the amount of load imposed by instructional multimedia, which corresponds with the study's literature review. These results suggest that, alongside subjective methods, eye movement data can serve as an appropriate tool for assessing the cognitive load imposed by multimedia learning and for qualifying multimedia instructional content. A significant difference was also found between the two groups in terms of their blinking rate.
Further investigation with different experiments is needed to examine the eye-movement criteria that showed no significant difference in this study, including fixation time, fixation rate, and microsaccade rate, so that a more definitive conclusion can be reached about the relationship between these parameters and the mental load imposed by multimedia English teaching.
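The 30-second blocking and group comparison described above can be sketched as follows (an illustrative outline only; block length is the one stated in the abstract, but the study's actual statistical tests may differ from the Welch statistic used here):

```python
from statistics import mean, stdev

def block_means(timestamps, values, block_len=30.0):
    """Aggregate a continuous eye-tracking signal (e.g. pupil diameter)
    into fixed 30-second blocks, as done for the 342 s multimedia above."""
    blocks = {}
    for t, v in zip(timestamps, values):
        blocks.setdefault(int(t // block_len), []).append(v)
    return [mean(blocks[k]) for k in sorted(blocks)]

def welch_t(a, b):
    """Welch's t statistic for comparing two groups' block means
    (unequal variances allowed; p-values would need the t distribution)."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5
```

Per-block aggregation like this is what makes signals sampled at a high rate comparable across participants and across the with-principles and without-principles conditions.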