36 research outputs found

    A Hybrid Neuro-Fuzzy Approach for Heterogeneous Patch Encoding in ViTs Using Contrastive Embeddings & Deep Knowledge Dispersion

    Get PDF
    Vision Transformers (ViTs) are commonly utilized in image recognition and related applications. A ViT delivers impressive results when it is pre-trained on massive volumes of data and then evaluated on mid-sized or small-scale image recognition benchmarks such as ImageNet and CIFAR-100. It converts images into patches, and the patch encoding is then used to produce latent embeddings (linear projection and positional embedding). In this work, the patch encoding module is modified to produce heterogeneous embeddings using new types of weighted encoding. A traditional transformer uses two embeddings: linear projection and positional embedding. The proposed model replaces these with a weighted combination of the linear projection embedding, the positional embedding, and three additional embeddings called Spatial Gated, Fourier Token Mixing, and Multi-layer Perceptron Mixture embedding. Secondly, a Divergent Knowledge Dispersion (DKD) mechanism is proposed to propagate previous latent information deep into the transformer network. It ensures that the latent knowledge is used in multi-headed attention for efficient patch encoding. Four benchmark datasets (MNIST, Fashion-MNIST, CIFAR-10 and CIFAR-100) are used for comparative performance evaluation. The proposed model is named SWEKP-based ViT, where SWEKP stands for Stochastic Weighted Composition of Contrastive Embeddings & Divergent Knowledge Dispersion for Heterogeneous Patch Encoding. The experimental results show that adding the extra embeddings to the transformer and integrating the DKD mechanism improves performance on the benchmark datasets. The ViT has been trained separately with each combination of these embeddings for encoding. Conclusively, the spatial gated embedding combined with the default embeddings outperforms the Fourier Token Mixing and MLP-Mixture embeddings.
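The weighted composition of patch embeddings described above can be illustrated with a minimal sketch. This is not the authors' SWEKP implementation; it assumes hypothetical shapes and a hypothetical FNet-style Fourier token-mixing branch, and combines three candidate embeddings with softmax-normalised scalar weights, as the abstract suggests:

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_token_mixing(x):
    # Mix information across tokens and channels with a 2-D FFT,
    # keeping only the real part (FNet-style token mixing).
    return np.fft.fft2(x).real

def weighted_patch_encoding(patches, W_proj, pos_emb, weights):
    """Combine several candidate embeddings with scalar weights.

    patches : (num_patches, patch_dim) flattened image patches
    W_proj  : (patch_dim, d_model) linear-projection matrix
    pos_emb : (num_patches, d_model) positional embedding
    weights : dict of scalar mixing weights (softmax-normalised below)
    """
    linear = patches @ W_proj                 # standard ViT projection
    fourier = fourier_token_mixing(linear)    # extra embedding branch
    w = np.array([weights["linear"], weights["positional"], weights["fourier"]])
    w = np.exp(w) / np.exp(w).sum()           # normalise the mixing weights
    return w[0] * linear + w[1] * pos_emb + w[2] * fourier

num_patches, patch_dim, d_model = 16, 48, 32
patches = rng.normal(size=(num_patches, patch_dim))
W_proj = rng.normal(size=(patch_dim, d_model))
pos_emb = rng.normal(size=(num_patches, d_model))
enc = weighted_patch_encoding(patches, W_proj, pos_emb,
                              {"linear": 1.0, "positional": 1.0, "fourier": 0.5})
print(enc.shape)  # (16, 32)
```

In a trainable model the mixing weights would be learned parameters rather than fixed scalars; here they are fixed only to keep the sketch self-contained.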

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    Get PDF
    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively.
Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570

    Learning representations of irregular particle-detector geometry with distance-weighted graph networks

    No full text
    We explore the use of graph networks to deal with irregular-geometry detectors in the context of particle reconstruction. Thanks to their representation-learning capabilities, graph networks can exploit the full detector granularity, while natively managing the event sparsity and arbitrarily complex detector geometries. We introduce two distance-weighted graph network architectures, dubbed GarNet and GravNet layers, and apply them to a typical particle reconstruction task. The performance of the new architectures is evaluated on a data set of simulated particle interactions on a toy model of a highly granular calorimeter, loosely inspired by the endcap calorimeter to be installed in the CMS detector for the High-Luminosity LHC phase. We study the clustering of energy depositions, which is the basis for calorimetric particle reconstruction, and provide a quantitative comparison to alternative approaches. The proposed algorithms provide an interesting alternative to existing methods, offering equally performing or less resource-demanding solutions with fewer underlying assumptions on the detector geometry and, consequently, the possibility to generalize to other detectors.
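The core idea of a distance-weighted graph network layer can be sketched in a few lines. This is a simplified illustration in the spirit of a GravNet layer, not the published implementation: hit features are projected into a learned coordinate space, each node gathers features from its nearest neighbours there, and the contributions are weighted by a Gaussian of the learned distance. All shapes and projection matrices below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def distance_weighted_aggregation(features, S_coord, F_feat, k=4):
    """One simplified distance-weighted graph layer.

    features : (n_hits, d_in) detector-hit features
    S_coord  : (d_in, d_s) projection into a learned coordinate space
    F_feat   : (d_in, d_f) projection of the features to be exchanged
    """
    coords = features @ S_coord            # learned "spatial" coordinates
    f = features @ F_feat                  # features passed to neighbours
    n = len(features)
    out = np.zeros_like(f)
    # pairwise squared distances in the learned coordinate space
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]        # k nearest neighbours (skip self)
        w = np.exp(-d2[i, nbrs])                 # Gaussian distance weighting
        out[i] = (w[:, None] * f[nbrs]).mean(0)  # weighted aggregation
    # concatenate the input with the aggregated neighbour information
    return np.concatenate([features, out], axis=1)

hits = rng.normal(size=(50, 8))
S = rng.normal(size=(8, 3))
F = rng.normal(size=(8, 6))
out = distance_weighted_aggregation(hits, S, F)
print(out.shape)  # (50, 14)
```

Because the neighbour graph is built in a learned space rather than in physical detector coordinates, the same layer applies unchanged to detectors with very different geometries, which is the generalization property the abstract highlights.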

    Multi-particle reconstruction in the High Granularity Calorimeter using object condensation and graph neural networks

    Get PDF
    The high-luminosity upgrade of the LHC will come with unprecedented physics and computing challenges. One of these challenges is the accurate reconstruction of particles in events with up to 200 simultaneous proton-proton interactions. The planned CMS High Granularity Calorimeter offers fine spatial resolution for this purpose, with more than 6 million channels, but also poses unique challenges to reconstruction algorithms aiming to reconstruct individual particle showers. In this contribution, we propose an end-to-end machine-learning method that performs clustering, classification, and energy and position regression in one step while staying within memory and computational constraints. We employ GravNet, a graph neural network, and an object condensation loss function to achieve this task. Additionally, we propose a method to relate truth showers to reconstructed showers by maximising the energy-weighted intersection over union using maximal weight matching. Our results show the efficiency of our method and highlight a promising research direction to be investigated further.
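The truth-to-reconstruction matching step described above can be sketched as follows. This is an illustrative toy, not the authors' code: showers are represented as hypothetical sets of hit indices, the pairwise score is the energy-weighted intersection over union, and the maximal-weight matching is found by brute force (a real implementation would use e.g. the Hungarian algorithm):

```python
import itertools
import numpy as np

def energy_weighted_iou(truth_hits, reco_hits, energies):
    """Energy-weighted intersection-over-union of two sets of hit indices."""
    inter = sum(energies[h] for h in truth_hits & reco_hits)
    union = sum(energies[h] for h in truth_hits | reco_hits)
    return inter / union if union > 0 else 0.0

def match_showers(truth, reco, energies):
    """Brute-force maximal-weight matching of truth to reco showers.

    Enumerating permutations is fine for the handful of showers in this
    toy; the combinatorics would be prohibitive at scale.
    """
    n = min(len(truth), len(reco))
    best, best_score = None, -1.0
    for perm in itertools.permutations(range(len(reco)), n):
        score = sum(energy_weighted_iou(truth[i], reco[perm[i]], energies)
                    for i in range(n))
        if score > best_score:
            best, best_score = list(zip(range(n), perm)), score
    return best, best_score

energies = np.array([1.0, 2.0, 0.5, 3.0, 1.5])   # per-hit energies
truth = [{0, 1}, {3, 4}]                          # truth showers (hit indices)
reco = [{3}, {0, 1, 2}]                           # reconstructed showers
pairs, score = match_showers(truth, reco, energies)
print(pairs)  # [(0, 1), (1, 0)]
```

Weighting the overlap by hit energy rather than hit count makes the matching insensitive to large numbers of low-energy hits on the shower periphery.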

    End-to-end multi-particle reconstruction in high occupancy imaging calorimeters with graph neural networks

    No full text
    We present an end-to-end reconstruction algorithm to build particle candidates from detector hits in next-generation granular calorimeters similar to the one foreseen for the high-luminosity upgrade of the CMS detector. The algorithm exploits a distance-weighted graph neural network, trained with object condensation, a graph segmentation technique. Through a single-shot approach, the reconstruction task is paired with energy regression. We describe the reconstruction performance in terms of efficiency as well as in terms of energy resolution. In addition, we show the jet reconstruction performance of our method and discuss its inference computational cost. To our knowledge, this work is the first-ever example of single-shot calorimetric reconstruction of O(1000) particles in high-luminosity conditions with 200 pileup.

    A correlational study: Establishing the link between quantum parameters and particle dynamics around Schwarzschild black hole

    No full text
    The study of point-particle dynamics around a black hole is a natural starting point from which to investigate the more complex motion of extended/celestial bodies. In this article, we study the dynamics of neutral and charged particles around the quantum-corrected Schwarzschild black hole and the Schwarzschild black hole. We investigate Noether symmetries and their conservation laws corresponding to the quantum-corrected Schwarzschild spacetime. We also study the effect of angular momentum, quantum parameters and magnetic field on the dynamics of neutral and charged particles around the quantum-corrected Schwarzschild black hole and the Schwarzschild black hole.

    GNN-based end-to-end reconstruction in the CMS Phase 2 High-Granularity Calorimeter

    No full text
    We present the current stage of research progress towards a one-pass, completely Machine Learning (ML) based imaging calorimeter reconstruction. The model used is based on Graph Neural Networks (GNNs) and directly analyzes the hits in each HGCAL endcap. The ML algorithm is trained to predict clusters of hits originating from the same incident particle by labeling the hits with the same cluster index. We impose simple criteria to assess whether the hits associated as a cluster by the prediction are matched to those hits resulting from any particular individual incident particle. The algorithm is studied by simulating two tau leptons in each of the two HGCAL endcaps, where each tau may decay according to its measured standard model branching probabilities. The simulation includes the material interaction of the tau decay products, which may create additional particles incident upon the calorimeter. Using this varied multiparticle environment, we can investigate the application of this reconstruction technique and begin to characterize energy containment and performance.