18 research outputs found

    A 3D cone beam computed tomography study of the styloid process of the temporal bone

    Get PDF
    Background: To investigate the length and three-dimensional orientation of the styloid process and to detail its morphological variations. Materials and methods: Forty-four patients undergoing temporal bone evaluation for different reasons were randomly selected and included in the present study. The length, angulation in the coronal and sagittal planes, and morphological variations of the styloid processes were assessed using cone-beam computed tomography. Pearson's correlation coefficient was used to test possible associations between the length of the styloid process and its angulations, as well as between the angulations. Student's t-test was used to compare the differences between the sample mean length and angulations in the normal and elongated styloid process groups. Results: The sagittal angle showed weak positive correlations with the styloid process length and the transverse angle (r = 0.24, p = 0.02, n = 88). A medium positive correlation was found between the sagittal and transverse angulations in the elongated styloid process group (r = 0.49, p = 0.0015, n = 38). There was a statistically significant difference between the mean sagittal angulation in the elongated and normal styloid process groups (p = 0.015). The styloid process morphology also varied in terms of shape, number, and degree of ossification. Conclusions: The morphometric and morphologic variations of the styloid process may be important factors to take into account, not only from the viewpoint of styloid syndromes, but also in preoperative planning and during surgery.
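
    For readers unfamiliar with the tests named above, here is a minimal sketch of the statistical analysis (Pearson's correlation and Student's t-test) using SciPy. The measurement arrays and the 30 mm elongation cut-off are illustrative assumptions, not the study's data or definitions.

        import numpy as np
        from scipy import stats

        # Hypothetical per-process measurements (mm and degrees); not the study's data.
        length_mm      = np.array([24.1, 31.5, 28.0, 35.2, 22.7, 40.3])
        sagittal_angle = np.array([18.0, 22.5, 20.1, 25.7, 17.3, 27.9])

        # Correlation between length and sagittal angulation.
        r, p = stats.pearsonr(length_mm, sagittal_angle)
        print(f"Pearson r = {r:.2f}, p = {p:.3f}")

        # Compare mean sagittal angulation between elongated and normal groups,
        # using the commonly quoted 30 mm threshold for an elongated styloid process.
        elongated = sagittal_angle[length_mm > 30]
        normal    = sagittal_angle[length_mm <= 30]
        t, p = stats.ttest_ind(elongated, normal)
        print(f"t = {t:.2f}, p = {p:.3f}")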

    Progressive Neural Networks

    Full text link
    Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
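
    A minimal sketch of the lateral-connection idea: each new column is trained on the new task while the previously trained column stays frozen and feeds its hidden activations into the new column through a learned adapter. Layer sizes, class names, and the single-previous-column simplification are illustrative assumptions, not the paper's exact architecture.

        import torch
        import torch.nn as nn

        class ProgressiveColumn(nn.Module):
            """One column; optionally receives lateral input from a frozen earlier column."""
            def __init__(self, in_dim, hidden, out_dim, prev_column=None):
                super().__init__()
                self.prev = prev_column
                if self.prev is not None:
                    for p in self.prev.parameters():   # earlier column is frozen: no forgetting
                        p.requires_grad = False
                self.l1 = nn.Linear(in_dim, hidden)
                self.l2 = nn.Linear(hidden, hidden)
                self.out = nn.Linear(hidden, out_dim)
                # Lateral adapter maps the previous column's hidden features into this column.
                self.lat2 = nn.Linear(hidden, hidden) if prev_column is not None else None

            def forward(self, x):
                h1 = torch.relu(self.l1(x))
                h2 = self.l2(h1)
                if self.prev is not None:
                    with torch.no_grad():
                        ph1 = torch.relu(self.prev.l1(x))   # reuse frozen features
                    h2 = h2 + self.lat2(ph1)
                return self.out(torch.relu(h2))

        # Usage: train column1 on task 1, then train column2 on task 2 with column1 frozen.
        column1 = ProgressiveColumn(8, 32, 4)
        column2 = ProgressiveColumn(8, 32, 4, prev_column=column1)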

    The upgrade of the ALICE TPC with GEMs and continuous readout

    Get PDF
    The upgrade of the ALICE TPC will allow the experiment to cope with the high interaction rates foreseen for the forthcoming Run 3 and Run 4 at the CERN LHC. In this article, we describe the design of the new readout chambers and front-end electronics, which are driven by the goals of the experiment. Gas Electron Multiplier (GEM) detectors arranged in stacks containing four GEMs each, and continuous readout electronics based on the SAMPA chip, an ALICE development, are replacing the previous elements. The construction of these new elements, together with their associated quality control procedures, is explained in detail. Finally, the replacement of the readout chambers and front-end electronics cards, together with the commissioning of the detector prior to installation in the experimental cavern, is presented. After a nine-year period of R&D, construction, and assembly, the upgrade of the TPC was completed in 2020.

    An inquiry into diffusion processes over interaction networks

    No full text
    This thesis aims to develop a comprehensive framework for modelling and controlling diffusion processes over interaction networks, striving to inform and improve public health policies against viral epidemics. Our work introduces four main contributions: (1) a new modelling technique that captures the heterogeneity and uncertainty of contact patterns and evaluates the impact of different testing and tracing strategies, which can be utilized in conjunction with any compartmental formulation to study complex spreading dynamics. Using this technique, we introduce and simulate a novel epidemiological model, SEIR-T, showing that contact tracing in a COVID-19 epidemic can be effective despite suboptimal digital uptakes or pervasive interview inefficiencies; (2) a versatile and cost-effective approach to optimizing the allocation of testing, tracing and vaccination resources based on the network structure and epidemic dynamics, which ranks individuals based on their role in the network and the epidemic state, being adaptable to the budget and risk preferences of regional policy makers, while still breaking high-risk transmission chains; (3) a reinforcement learning-based agent, underpinned by a highly transferable graph neural architecture, that can find optimal epidemic control policies from simulation data, outperforming standard heuristic approaches by up to 15% in the containment rate, while far surpassing more standard random samplers by margins of 50% or more; and (4) a range of visualization tools that can aid in understanding and communicating the effects of public health interventions to policy makers and the populace, which include prediction explanation and state visualization techniques for scrutinizing the learning-based policies introduced, and other tools the authorities can use to assess the cost-benefit trade-off of enacting different combinations of interventions. The simulation-control framework we introduce is particularly flexible and can effectually model the spread of various pathogens or analogous diffusion processes, such as information dissemination. Similarly, the learned epidemic policies are versatile and easily transferable to a wide range of diffusion scenarios and network structures.
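
    A toy sketch of a discrete-time SEIR-style diffusion over a random contact network, using networkx. The thesis's SEIR-T model additionally tracks tracing-related states, so this only illustrates the underlying network-spread mechanics; the graph and rate parameters below are invented for illustration.

        import random
        import networkx as nx

        # Random contact network and initial seeding (illustrative values).
        G = nx.erdos_renyi_graph(n=200, p=0.03, seed=1)
        state = {v: "S" for v in G}                  # S, E, I, R compartments
        for seed in random.sample(list(G), 3):
            state[seed] = "I"

        beta, sigma, gamma = 0.05, 0.25, 0.10        # infection / incubation / recovery rates
        for t in range(60):
            new_state = dict(state)
            for v in G:
                if state[v] == "S":
                    # Independent exposure chance per infectious neighbour.
                    k = sum(state[u] == "I" for u in G[v])
                    if k and random.random() < 1 - (1 - beta) ** k:
                        new_state[v] = "E"
                elif state[v] == "E" and random.random() < sigma:
                    new_state[v] = "I"
                elif state[v] == "I" and random.random() < gamma:
                    new_state[v] = "R"
            state = new_state

        print(sum(s == "R" for s in state.values()), "recovered after 60 steps")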

    Correction of the baseline fluctuations in the GEM-based ALICE TPC

    No full text
    To operate the ALICE Time Projection Chamber in continuous mode during the Run 3 and Run 4 data-taking periods of the Large Hadron Collider, the multi-wire proportional chamber-based readout was replaced with gas-electron multipliers. As expected, the detector performance is affected by the so-called common-mode effect, which leads to significant baseline fluctuations. A detailed study of the pulse shape with the new readout has revealed that it is also affected by ion tails. Since reconstruction and data compression are performed fully online, these effects must be corrected at the hardware level in the FPGA-based common readout units. The characteristics of the common-mode effect and of the ion tail, as well as the algorithms developed for their online correction, are described in this paper. The common-mode dependencies are studied using machine-learning techniques. Toy Monte Carlo simulations are performed to illustrate the importance of online corrections and to investigate the performance of the developed algorithms.
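
    A toy numpy illustration of the basic idea behind a common-mode baseline correction: estimate the per-time-bin baseline shift from signal-free channels and subtract it from all channels. The actual correction described in the paper runs online in the FPGA-based common readout units, with dependencies derived via machine-learning techniques; the thresholds, pulse shape, and dip amplitude below are invented for illustration only.

        import numpy as np

        rng = np.random.default_rng(0)
        n_channels, n_timebins = 128, 500
        adc = rng.normal(0.0, 0.9, size=(n_channels, n_timebins))    # noise-only baseline

        # Inject a pulse on a few channels and a small negative common-mode dip,
        # proportional to the pulse, on every channel of the stack (toy model).
        pulse = 80.0 * np.exp(-0.5 * ((np.arange(n_timebins) - 250) / 5.0) ** 2)
        adc[:5] += pulse
        adc -= 0.01 * pulse

        # Estimate the common baseline shift per time bin from channels without signal.
        signal_free = adc < 5.0
        baseline_shift = np.nanmedian(np.where(signal_free, adc, np.nan), axis=0)
        corrected = adc - baseline_shift

        print("residual baseline at the pulse peak:",
              round(float(np.median(corrected[5:, 250])), 3))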