Supervised Jet Clustering with Graph Neural Networks for Lorentz Boosted Bosons
Jet clustering is traditionally an unsupervised learning task because there
is no unique way to associate hadronic final states with the quark and gluon
degrees of freedom that generated them. However, for uncolored particles like
W, Z, and Higgs bosons, it is possible to approximately (though not
exactly) associate final state hadrons to their ancestor. By labeling simulated
final state hadrons as descending from an uncolored particle, it is possible to
train a supervised learning method to create boson jets. Such a method
operates on individual particles and identifies connections between particles
originating from the same uncolored particle. Graph neural networks are
well-suited for this purpose as they can act on unordered sets and naturally
create strong connections between particles with the same label. These networks
are used to train a supervised jet clustering algorithm. The kinematic
properties of these graph jets better match the properties of simulated
Lorentz-boosted W bosons. Furthermore, the graph jets contain more
information for discriminating W jets from generic quark jets. This work
marks the beginning of a new exploration in jet physics to use machine learning
to optimize the construction of jets and not only the observables computed from
jet constituents.
Comment: 12 pages, 8 figures; data is published at https://zenodo.org/record/3981290#.XzQs5zVlAUF; code is available at https://github.com/xju2/root_gnn/releases/tag/v0.6.
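The clustering strategy described here, a network that acts on an unordered set of particles and scores pairwise connections so that particles from the same uncolored ancestor end up linked, can be illustrated with a minimal PyTorch sketch. This is not the paper's network (that lives in the linked root_gnn repository); the EdgeScorer module, the feature layout, and the 0.5 threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EdgeScorer(nn.Module):
    """Scores whether two particles descend from the same uncolored ancestor.

    A hypothetical stand-in for the edge-classification stage of a GNN-based
    supervised jet clustering algorithm; names and sizes are assumptions."""

    def __init__(self, n_features: int = 4, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, particles: torch.Tensor) -> torch.Tensor:
        # particles: (N, n_features), e.g. (pT, eta, phi, E) per particle.
        n = particles.shape[0]
        # Score every ordered pair with one shared MLP, so the output is
        # insensitive to how the input set of particles is ordered.
        pairs = torch.cat(
            [particles.unsqueeze(1).expand(n, n, -1),
             particles.unsqueeze(0).expand(n, n, -1)], dim=-1)
        return self.mlp(pairs).squeeze(-1)  # (N, N) edge logits

# Toy usage: 10 particles with 4 kinematic features each.
scores = EdgeScorer()(torch.randn(10, 4))
# Thresholded edge scores define connected components, i.e. the "graph jets".
adjacency = torch.sigmoid(scores) > 0.5
```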
Reproducibility of the dynamics of facial expressions in unilateral facial palsy
The aim of this study was to assess the reproducibility of non-verbal facial
expressions in unilateral facial paralysis using dynamic four-dimensional (4D)
imaging. The Di4D system was used to record five facial expressions of 20 adult
patients. The system captured 60 three-dimensional (3D) images per second; each
facial expression took 3–4 seconds and was recorded in real time. Thus a set of
180 3D facial images was generated for each expression. The procedure was
repeated after 30 min to assess the reproducibility of the expressions. A
mathematical facial mesh consisting of thousands of quasi-point ‘vertices’ was
conformed to the face in order to determine the morphological characteristics in a
comprehensive manner. The vertices were tracked throughout the sequence of the
180 images. Five key 3D facial frames from each sequence of images were
analyzed. Comparisons were made between the first and second capture of each
facial expression to assess the reproducibility of facial movements. Corresponding
images were aligned using partial Procrustes analysis, and the root mean square
distance between them was calculated and analyzed statistically (paired Student t-test,
P < 0.05). Facial expressions of lip purse, cheek puff, and raising of eyebrows
were reproducible. Facial expressions of maximum smile and forceful eye closure
were not reproducible. The limited coordination of various groups of facial muscles
contributed to the lack of reproducibility of these facial expressions. 4D imaging is a
useful clinical tool for the assessment of facial expressions.
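The analysis recipe named above (partial Procrustes alignment, root mean square distance, paired t-test) can be sketched in a few lines of Python. Note that SciPy's procrustes performs a full Procrustes superimposition including scaling, whereas the study used partial Procrustes; the sketch illustrates the recipe only, and all data below are made up, not the study's measurements.

```python
import numpy as np
from scipy.spatial import procrustes
from scipy.stats import ttest_rel

def rms_after_procrustes(mesh_a: np.ndarray, mesh_b: np.ndarray) -> float:
    """Align two (n_vertices, 3) meshes with Procrustes analysis and return
    the root mean square distance between corresponding vertices."""
    aligned_a, aligned_b, _ = procrustes(mesh_a, mesh_b)
    return float(np.sqrt(np.mean(np.sum((aligned_a - aligned_b) ** 2, axis=1))))

rng = np.random.default_rng(0)
mesh_first = rng.normal(size=(500, 3))   # hypothetical tracked vertices, capture 1
mesh_second = rng.normal(size=(500, 3))  # hypothetical tracked vertices, capture 2
print(f"RMS after alignment: {rms_after_procrustes(mesh_first, mesh_second):.3f}")

# Paired comparison across patients: hypothetical per-patient measurements
# from the first and second recording sessions, 20 patients.
first_session = rng.normal(0.8, 0.1, 20)
second_session = rng.normal(0.8, 0.1, 20)
t_stat, p_value = ttest_rel(first_session, second_session)
print(f"paired t-test: t={t_stat:.2f}, p={p_value:.3f}")  # p >= 0.05 suggests reproducibility
```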
Comparison of the accuracy of voxel based registration and surface based registration for 3D assessment of surgical change following orthognathic surgery
Purpose:
Superimposition of two-dimensional preoperative and postoperative facial images, including radiographs and photographs, is used to evaluate surgical changes after orthognathic surgery. Recently, three-dimensional (3D) imaging has been introduced, allowing more accurate analysis of surgical changes. Surface-based registration and voxel-based registration are commonly used methods for 3D superimposition. The aim of this study was to evaluate and compare the accuracy of the two methods.
Materials and methods:
Pre-operative and 6-month post-operative cone beam CT (CBCT) images of 31 patients were randomly selected from the orthognathic patient database at the Dental Hospital and School, University of Glasgow, UK. Voxel-based registration was performed on the DICOM (Digital Imaging and Communications in Medicine) images using Maxilim software (Medicim-Medical Image Computing, Belgium). Surface-based registration was performed on the soft and hard tissue 3D models using VRMesh (VirtualGrid, Bellevue City, WA). The accuracy of the superimposition was evaluated by measuring the mean absolute distance between the two 3D image surfaces. The results were statistically analysed using a paired Student t-test, ANOVA with a post-hoc Duncan test, a one-sample t-test, and the Pearson correlation coefficient.
Results:
The results showed no statistically significant difference between the two superimposition methods (at the P < 0.05 significance level). However, surface-based registration showed higher variability in the mean distances between corresponding surfaces than voxel-based registration, especially for the soft tissue. Within each method there was a significant difference between superimposition of the soft and hard tissue models.
Conclusions:
There was no statistically significant difference between the two registration methods, and the difference was unlikely to be of any clinical significance. Voxel-based registration was associated with less variability. Registering the soft tissue in isolation from the hard tissue may not be a true reflection of the surgical change.
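The accuracy metric used above, the mean absolute distance between two registered 3D surfaces, can be approximated with a nearest-neighbour query over mesh vertices. This is a minimal sketch assuming the surfaces are available as point arrays, not the Maxilim/VRMesh workflow used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(surface_a: np.ndarray, surface_b: np.ndarray) -> float:
    """Mean absolute distance from each vertex of surface_a to its nearest
    neighbour on surface_b; both are (n, 3) vertex arrays, already registered."""
    tree = cKDTree(surface_b)
    distances, _ = tree.query(surface_a)
    return float(np.mean(np.abs(distances)))

# Averaging the two directions (a->b and b->a) gives a symmetric variant,
# which is often preferred since the one-sided distance is not symmetric.
```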
Finite element analysis of porously punched prosthetic short stem virtually designed for simulative uncemented hip arthroplasty
Background:
There is no universal hip implant that suitably fits all femoral types, so whether porous short-stem prostheses are suitable for hip arthroplasty needs to be assessed scientifically.
Methods:
Ten femur specimens scanned by CT were imported into Mimics to rebuild 3D models; their .stl datasets were imported into Geomagic Studio for simulated osteotomy; the generated .igs datasets were processed in UG to fit solid models; the prostheses were obtained from patients in the same way and bored with punched holes designed virtually in Pro/E; the cement layer between femur and prosthesis was extracted by deleting the prosthesis. In HyperMesh, all components were assembled into four artificial joint configurations: (a) cemented long-stem prosthesis; (b) porous long-stem prosthesis; (c) cemented short-stem prosthesis; (d) porous short-stem prosthesis. These finite element models were then exported to ANSYS for numerical solution.
Results:
Whether measured on the femur, the prosthesis, or the combined femur-prosthesis construct, Kruskal-Wallis tests (p > 0.05) showed no significant difference in displacement among the four designs, (d) ≈ (a) ≈ (b) ≈ (c), under the 600 N load. For stresses on the prosthesis, (d) ≈ (a) ≈ (b) ≈ (c) also held; on the femur, (d) ≈ (a) ≈ (b) < (c); and on the joint as a whole, (d) ≈ (a) < (b) < (c).
Conclusions:
Mechanically, all four artificial joint designs are quantitatively stable. Cemented short-stem prostheses present the highest stress, while porous short-stem and cemented long-stem designs are equivalently better than porous long-stem prostheses as alternatives for femoral-head replacement. The choice between the two depends on clinical conditions: the cemented long-stem design is favorable for inactive elderly patients with osteoporosis, while the porously punched cementless short-stem design is also suitable for patients with osteoporosis and is favorable for those with a cement allergy. Clinically, the strength of this study is that it enables a preoperative strategy that provides accurate correction and decreases procedure time.
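The group comparison reported in the Results can be reproduced in outline with SciPy's Kruskal-Wallis test. The displacement numbers below are made up for illustration; they are not the study's data.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)
# Hypothetical displacement magnitudes (mm) under a 600 N load for the four
# designs: (a) cemented long-stem, (b) porous long-stem,
# (c) cemented short-stem, (d) porous short-stem; ten femur specimens each.
a, b, c, d = (rng.normal(0.12, 0.02, 10) for _ in range(4))

stat, p = kruskal(a, b, c, d)
# p > 0.05 would indicate no significant difference among the four designs.
print(f"Kruskal-Wallis: H={stat:.2f}, p={p:.3f}")
```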
Correlation of pre-operative cancer imaging techniques with post-operative gross and microscopic pathology images
In this paper, different algorithms for volume reconstruction from tomographic cross-sectional pathology slices are described and tested. A tissue-mimicking phantom made of a mixture of agar and aluminium oxide was sliced at different thicknesses following standard pathology guidelines. The phantom model was also virtually sliced and reconstructed in software. Results showed that the shape-based spline interpolation method was the most precise, but generated a volume underestimation of 0.5%.
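As a rough illustration of volume reconstruction from parallel slices, the sketch below integrates per-slice cross-sectional areas over the slice thickness. This is a toy version of the general idea, not the shape-based spline interpolation method the paper found most precise; the sphere phantom is an assumption for checking purposes.

```python
import numpy as np

def volume_from_slices(slice_areas: np.ndarray, thickness: float) -> float:
    """Trapezoidal integration of per-slice cross-sectional areas (mm^2)
    over a uniform slice thickness (mm); returns the volume in mm^3."""
    interior = slice_areas[1:-1].sum()
    return float(thickness * (slice_areas[0] / 2 + interior + slice_areas[-1] / 2))

# Toy check: a sphere of radius 10 mm sliced every 2 mm.
z = np.arange(-10.0, 10.0 + 1e-9, 2.0)
areas = np.pi * np.maximum(10.0**2 - z**2, 0.0)
print(f"estimate = {volume_from_slices(areas, 2.0):.0f} mm^3")  # ~4147
print(f"exact    = {4/3 * np.pi * 10.0**3:.0f} mm^3")           # ~4189, a slight underestimate
```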
An Application of HEP Track Seeding to Astrophysical Data
We apply methods of particle track reconstruction in High Energy Physics
(HEP) to the search for distinct stellar populations in the Milky Way, using
the Gaia EDR3 data set. This was motivated by analogies between the 3D space
points in HEP detectors and the positions of stars (which are also points in a
coordinate space) and the way collections of space points correspond to
particle trajectories in HEP, while collections of stars from distinct
populations (such as stellar streams) can resemble tracks. Track reconstruction
consists of multiple steps, the first one being seeding. In this note, we
describe our implementation of the seeding step in the search for
distinct stellar populations, present its results, and indicate how the next steps will proceed.
Our seeding method uses machine learning tools from the FAISS library, such as
the k-nearest neighbors (kNN) search.
Comment: 9 pages, 10 figures, 1 table. Conference proceedings preprint for Connecting the Dots (CTD) 2023. Updated figures, fixed typo.
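The seeding ingredient named in the abstract, a kNN search with the FAISS library, looks roughly like the sketch below. This is a generic illustration on random positions, not the note's actual configuration; the coordinate choice and catalogue size are assumptions.

```python
import numpy as np
import faiss  # pip install faiss-cpu

rng = np.random.default_rng(2)
# Hypothetical 3D star positions (e.g. Galactocentric x, y, z);
# FAISS expects float32 arrays.
stars = rng.normal(size=(100_000, 3)).astype("float32")

index = faiss.IndexFlatL2(3)  # exact L2 nearest-neighbour index
index.add(stars)              # index the full catalogue

# For each query star, find its 5 nearest neighbours in position space.
distances, neighbors = index.search(stars[:10], 5)
# neighbors[i] lists the 5 nearest stars to star i (including itself);
# compact groups of mutual neighbours can serve as track seeds.
```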
Towards a deep learning model for hadronization
Hadronization is a complex quantum process whereby quarks and gluons become hadrons. The widely used models of hadronization in event generators are based on physically inspired phenomenological models with many free parameters. We propose an alternative approach whereby neural networks are used instead. Deep generative models are highly flexible, differentiable, and compatible with graphics processing units. We take the first step towards a data-driven machine learning-based hadronization model by replacing a component of the hadronization model within the Herwig event generator (the cluster model) with HADML, a computer code implementing a generative adversarial network. We show that HADML is capable of reproducing the kinematic properties of cluster decays. Furthermore, we integrate it into Herwig to generate entire events that can be compared with the output of the public Herwig simulator as well as with data.
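A generative adversarial setup of the kind described, with a generator mapping noise plus a cluster condition to decay kinematics and a discriminator judging real versus generated decays, can be sketched minimally in PyTorch. The layer sizes, the two-angle output, and the cluster-mass conditioning are illustrative assumptions, not the HADML architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps (noise, cluster condition) to decay kinematics, e.g. the two
    angles of a two-body cluster decay; sizes are assumptions."""
    def __init__(self, noise_dim=8, cond_dim=1, out_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, noise, cond):
        return self.net(torch.cat([noise, cond], dim=-1))

class Discriminator(nn.Module):
    """Scores whether kinematics paired with a condition look like real decays."""
    def __init__(self, in_dim=2, cond_dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=-1))

# One adversarial step on toy data: cond = cluster mass, x = decay angles.
gen, disc = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
cond = torch.rand(128, 1)   # hypothetical cluster masses
real = torch.randn(128, 2)  # stand-in for true decay kinematics
fake = gen(torch.randn(128, 8), cond)
d_loss = bce(disc(real, cond), torch.ones(128, 1)) + \
         bce(disc(fake.detach(), cond), torch.zeros(128, 1))
g_loss = bce(disc(fake, cond), torch.ones(128, 1))
```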
Generative Machine Learning for Detector Response Modeling with a Conditional Normalizing Flow
In this paper, we explore the potential of generative machine learning models
as an alternative to the computationally expensive Monte Carlo (MC) simulations
commonly used by the Large Hadron Collider (LHC) experiments. Our objective is
to develop a generative model capable of efficiently simulating detector
responses for specific particle observables, focusing on the correlations
between detector responses of different particles in the same event and
accommodating asymmetric detector responses. We present a conditional
normalizing flow model (CNF) based on a chain of Masked Autoregressive Flows,
which effectively incorporates conditional variables and models
high-dimensional density distributions. We assess the performance of the CNF
model using a simulated sample of Higgs bosons decaying to diphotons at
the LHC. We create reconstruction-level observables using a smearing technique.
We show that conditional normalizing flows can accurately model complex
detector responses and their correlation. This method can potentially reduce
the computational burden associated with generating large numbers of simulated
events while ensuring that the generated events meet the requirements for data
analyses. We make our code available at https://github.com/allixu/normalizing_flow_for_detector_response.
Comment: 16 pages, 6 figures.
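A conditional normalizing flow built from a chain of Masked Autoregressive Flows can be assembled, for example, with the nflows library. This is a minimal sketch under assumed dimensions (a 2D detector response conditioned on two particle-level variables), not the paper's model; see the linked repository for that.

```python
import torch
from nflows.flows.base import Flow
from nflows.distributions.normal import StandardNormal
from nflows.transforms.base import CompositeTransform
from nflows.transforms.autoregressive import MaskedAffineAutoregressiveTransform
from nflows.transforms.permutations import ReversePermutation

features, context_features = 2, 2  # assumed: 2D response, 2 conditioning variables

# Chain of masked autoregressive flows, each followed by a permutation so
# that every dimension eventually conditions on every other.
transforms = []
for _ in range(5):
    transforms.append(MaskedAffineAutoregressiveTransform(
        features=features, hidden_features=64, context_features=context_features))
    transforms.append(ReversePermutation(features=features))

flow = Flow(CompositeTransform(transforms), StandardNormal([features]))

# Training objective: maximize the log-likelihood of detector responses x
# given the particle-level context; random stand-in data here.
x = torch.randn(256, features)
context = torch.randn(256, context_features)
loss = -flow.log_prob(inputs=x, context=context).mean()

# Sampling detector responses for new events:
samples = flow.sample(num_samples=10, context=context[:3])  # shape (3, 10, 2)
```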