On an hypercomplex generalization of Gould-Hopper and related Chebyshev polynomials
An operational approach, introduced by Gould and Hopper for the construction of generalized Hermite polynomials, is followed in the hypercomplex context to build multidimensional generalized Hermite polynomials by considering an appropriate basic set of monogenic polynomials. Directly related functions, such as Chebyshev polynomials of the first and second kind, are also constructed.
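As a point of reference (not part of the hypercomplex construction itself), the classical one-variable Gould-Hopper polynomials admit the operational definition

    g_n^m(x,h) = \exp\!\Big(h \frac{d^m}{dx^m}\Big) x^n = \sum_{k=0}^{\lfloor n/m \rfloor} \frac{n!}{k!\,(n-mk)!}\, h^k\, x^{n-mk},

and for m = 2 they reduce to Hermite-type (heat) polynomials; the abstract indicates that this operational scheme is followed with an appropriate basic set of monogenic polynomials in place of the powers x^n.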
Segal-Bargmann-Fock modules of monogenic functions
In this paper we introduce the classical Segal-Bargmann transform, starting from the basis of Hermite polynomials, and extend it to Clifford algebra-valued functions. We then apply the results to monogenic functions and prove that the Segal-Bargmann kernel corresponds to the kernel of the Fourier-Borel transform for monogenic functionals. This kernel is also the reproducing kernel for the monogenic Bargmann module.
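For orientation, in one common normalization the classical one-dimensional Segal-Bargmann transform referred to above acts on f \in L^2(\mathbb{R}) as

    (Bf)(z) = \pi^{-1/4} \int_{\mathbb{R}} \exp\!\Big(-\tfrac{1}{2}(z^2 + x^2) + \sqrt{2}\, z x\Big) f(x)\, dx, \qquad z \in \mathbb{C},

mapping the n-th Hermite function to the monomial z^n/\sqrt{n!} in Fock space; normalization conventions vary between references, and the Clifford algebra-valued extension is the subject of the paper.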
Towards real-time 6D pose estimation of objects in single-view cone-beam X-ray
Deep learning-based pose estimation algorithms can successfully estimate the pose of objects in an image, especially in the colour-image domain. 6D object pose estimation methods based on deep learning for X-ray images often use custom architectures that rely on extensive CAD models and simulated data for training. Recent RGB-based methods opt to solve pose estimation problems using small datasets, making them more attractive for the X-ray domain, where medical data is scarce. We refine an existing RGB-based model (SingleShotPose) to estimate the 6D pose of a marked cube from grayscale X-ray images, creating a generic solution trained on only real X-ray data and adjusted for X-ray acquisition geometry. The model regresses 2D control points and calculates the pose through 2D/3D correspondences using Perspective-n-Point (PnP), allowing a single trained model to be used across all supported cone-beam-based X-ray geometries. Since modern X-ray systems continuously adjust acquisition parameters during a procedure, it is essential for such a pose estimation network to take these parameters into account in order to be deployed successfully and find real use. With a 5-cm/5-degree accuracy of 93% and an average 3D rotation error of 2.2 degrees, the results of the proposed approach are comparable with state-of-the-art alternatives, while requiring significantly fewer real training examples and being applicable in real-time settings. (Published at SPIE Medical Imaging.)
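As an illustrative aside (not the authors' implementation), the 2D/3D correspondence step described above can be sketched with OpenCV's PnP solver; the cube size, control-point values, and camera intrinsics below are hypothetical placeholders.

    # Minimal sketch of the PnP step: recover a 6D pose from regressed 2D control
    # points and known 3D cube corners. Cube size, intrinsics, and the 2D points
    # are illustrative placeholders, not values from the paper.
    import numpy as np
    import cv2

    CUBE_SIZE = 0.05  # assumed cube edge length in metres (hypothetical)
    s = CUBE_SIZE / 2.0

    # 3D control points: cube centre plus its 8 corners, in the object frame.
    object_points = np.array(
        [[0, 0, 0]] + [[x, y, z] for x in (-s, s) for y in (-s, s) for z in (-s, s)],
        dtype=np.float32,
    )

    # 2D control points as regressed by the network (placeholder pixel values).
    image_points = np.random.rand(9, 2).astype(np.float32) * 512

    # Intrinsics of the cone-beam geometry, updated per acquisition (placeholders).
    K = np.array([[1200.0, 0.0, 256.0],
                  [0.0, 1200.0, 256.0],
                  [0.0, 0.0, 1.0]], dtype=np.float32)
    dist_coeffs = np.zeros(5, dtype=np.float32)  # assume no lens distortion

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
    if ok:
        R, _ = cv2.Rodrigues(rvec)  # convert rotation vector to 3x3 matrix
        print("rotation:\n", R, "\ntranslation (m):", tvec.ravel())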
Investigating and Improving Latent Density Segmentation Models for Aleatoric Uncertainty Quantification in Medical Imaging
Data uncertainties, such as sensor noise or occlusions, can introduce irreducible ambiguities in images, which result in varying yet plausible semantic hypotheses. In machine learning, this ambiguity is commonly referred to as aleatoric uncertainty. Latent density models can be used to address this problem in image segmentation. The most popular approach is the Probabilistic U-Net (PU-Net), which uses latent Normal densities to optimize the conditional data log-likelihood Evidence Lower Bound. In this work, we demonstrate that the PU-Net latent space is severely inhomogeneous. As a result, the effectiveness of gradient descent is inhibited and the model becomes extremely sensitive to the localization of the latent space samples, resulting in defective predictions. To address this, we present the Sinkhorn PU-Net (SPU-Net), which uses the Sinkhorn divergence to promote homogeneity across all latent dimensions, effectively improving gradient-descent updates and model robustness. Applied to public datasets covering various clinical segmentation problems, the SPU-Net achieves performance gains of up to 11% over preceding latent variable models for probabilistic segmentation on the Hungarian-matched metric. The results indicate that encouraging a homogeneous latent space can significantly improve latent density modeling for medical image segmentation.
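As an illustrative aside, a minimal NumPy sketch of the debiased Sinkhorn divergence between two batches of latent samples is given below; the regularization strength, iteration count, and sample dimensions are placeholder choices, not the SPU-Net configuration.

    # Minimal NumPy sketch of the debiased Sinkhorn divergence between two point
    # clouds with uniform weights. Hyperparameters are illustrative placeholders.
    import numpy as np

    def sinkhorn_cost(x, y, eps=0.5, n_iters=200):
        """Entropy-regularized OT cost between uniform measures on x and y."""
        C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)  # squared L2 cost
        a = np.full(len(x), 1.0 / len(x))
        b = np.full(len(y), 1.0 / len(y))
        K = np.exp(-C / eps)
        u = np.ones_like(a)
        for _ in range(n_iters):                 # Sinkhorn fixed-point iterations
            v = b / (K.T @ u)
            u = a / (K @ v)
        P = u[:, None] * K * v[None, :]          # resulting transport plan
        return float(np.sum(P * C))

    def sinkhorn_divergence(x, y, eps=0.5):
        """Debiased divergence: OT(x,y) - 0.5*OT(x,x) - 0.5*OT(y,y)."""
        return (sinkhorn_cost(x, y, eps)
                - 0.5 * sinkhorn_cost(x, x, eps)
                - 0.5 * sinkhorn_cost(y, y, eps))

    # Example: compare latent samples from two hypothetical 6-dimensional densities.
    rng = np.random.default_rng(0)
    z_a = rng.normal(size=(64, 6))
    z_b = rng.normal(loc=0.5, size=(64, 6))
    print(sinkhorn_divergence(z_a, z_b))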
q-deformed harmonic and Clifford analysis and the q-Hermite and Laguerre polynomials
We define a q-deformation of the Dirac operator, inspired by the one-dimensional q-derivative. This implies a q-deformation of the partial derivatives. By taking the square of this Dirac operator we find a q-deformation of the Laplace operator, which allows us to construct q-deformed Schroedinger equations in higher dimensions. The equivalence of these Schroedinger equations with those defined on q-Euclidean space in quantum variables is shown. We also define the m-dimensional q-Clifford-Hermite polynomials and show their connection with the q-Laguerre polynomials. These polynomials are orthogonal with respect to an m-dimensional q-integration, which is related to integration on q-Euclidean space. The q-Laguerre polynomials are the eigenvectors of an su_q(1|1)-representation.
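For reference, the one-dimensional q-derivative (Jackson derivative) behind the deformation is

    D_q f(x) = \frac{f(qx) - f(x)}{(q-1)\,x}, \qquad D_q\, x^n = [n]_q\, x^{n-1}, \quad [n]_q = \frac{q^n - 1}{q - 1},

which recovers the ordinary derivative as q \to 1; the q-Dirac operator of the paper is built from partial derivatives deformed in this sense.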
Supplementary host-plant research on Meloidogyne fallax Karssen, 1996
The root-knot nematodes Meloidogyne chitwoodi and M. fallax are on the quarantine lists of the EU and the EPPO. The life cycle of these nematodes, their potential for damage, and their limited distribution in Europe were the reasons for their quarantine status. Both nematodes are present in the Netherlands. For some crops the host-plant suitability for M. chitwoodi is known, but data for M. fallax are lacking. To supplement the missing data, a number of crops were tested for their host-plant suitability.
Early esophageal adenocarcinoma detection using deep learning methods
Purpose This study aims to adapt and evaluate the performance of different state-of-the-art deep learning object detection methods to automatically identify esophageal adenocarcinoma (EAC) regions from high-definition white light endoscopy (HD-WLE) images.
Method Several state-of-the-art object detection methods using Convolutional Neural Networks (CNNs) were adapted to automatically detect abnormal regions in HD-WLE images of the esophagus, using VGG16 as the backbone architecture for feature extraction. These methods are the Region-based Convolutional Neural Network (R-CNN), Fast R-CNN, Faster R-CNN, and the Single-Shot Multibox Detector (SSD). For the evaluation of the different methods, 100 images from 39 patients, manually annotated by five experienced clinicians to provide the ground truth, were used for testing.
Results Experimental results show that the SSD and Faster R-CNN networks perform promisingly, with the SSD outperforming the other methods and achieving a sensitivity of 0.96, a specificity of 0.92, and an F-measure of 0.94. Additionally, the Average Recall Rate of the Faster R-CNN in locating the EAC region accurately is 0.83.
Conclusion In this paper, recent deep learning object detection methods are adapted to detect esophageal abnormalities automatically. The evaluation demonstrates their ability to locate abnormal regions in the esophagus from endoscopic images. Automatic detection is a crucial step that may support early detection and treatment of EAC and can also improve automatic tumor segmentation to monitor tumor growth and treatment outcome.
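As an illustrative aside on the reported metrics, the sensitivity, specificity, and F-measure quoted above can be computed from confusion-matrix counts as sketched below; the counts are placeholders, not the study's data.

    # Minimal sketch of the reported detection metrics from confusion-matrix counts.
    # The counts below are placeholders, not the study's actual results.
    def detection_metrics(tp, fp, tn, fn):
        sensitivity = tp / (tp + fn)              # recall on abnormal regions
        specificity = tn / (tn + fp)              # correct rejection of normal regions
        precision = tp / (tp + fp)
        f_measure = 2 * precision * sensitivity / (precision + sensitivity)
        return sensitivity, specificity, f_measure

    print(detection_metrics(tp=48, fp=4, tn=46, fn=2))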
HIV in hiding: methods and data requirements for the estimation of the number of people living with undiagnosed HIV
Many people who are HIV positive are unaware of their infection status. Estimating the number of people with undiagnosed HIV within a country or region is vital for understanding the future need for treatment and for motivating testing programs. We review the estimation approaches currently in use. They can be broadly classified into those based on prevalence surveys and those based on reported HIV and AIDS cases. Estimation based on prevalence data requires regular prevalence surveys in different population groups, together with estimates of the sizes of these groups. The recommended minimal case-reporting data needed to estimate the number of patients with undiagnosed HIV are HIV diagnoses, including the CD4 count at diagnosis and whether there was an AIDS diagnosis in the 3 months before or after the HIV diagnosis, together with data on deaths in people with HIV. We encourage all countries to implement several methods, which will help develop our understanding of the strengths and weaknesses of the various approaches.
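As an illustrative aside, the prevalence-survey-based approach described above amounts to summing prevalence times group size over population groups and subtracting reported diagnoses; the sketch below uses invented group names and figures.

    # Illustrative sketch of prevalence-based estimation of undiagnosed HIV.
    # Group names, prevalences, sizes, and diagnosis counts are invented placeholders.
    groups = {
        # group: (prevalence from survey, estimated group size)
        "group_a": (0.050, 40_000),
        "group_b": (0.008, 250_000),
    }
    reported_diagnoses = 3_200  # diagnosed and alive (placeholder)

    total_living_with_hiv = sum(p * n for p, n in groups.values())
    undiagnosed = total_living_with_hiv - reported_diagnoses
    print(f"estimated undiagnosed: {undiagnosed:.0f}")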
A deep learning system for detection of early Barrett's neoplasia: a model development and validation study
BACKGROUND: Computer-aided detection (CADe) systems could assist endoscopists in detecting early neoplasia in Barrett's oesophagus, which can be difficult to detect in endoscopic images. The aim of this study was to develop, test, and benchmark a CADe system for early neoplasia in Barrett's oesophagus.
METHODS: The CADe system was first pretrained with ImageNet, followed by domain-specific pretraining with GastroNet. We trained the CADe system on a dataset of 14 046 images (2506 patients) of confirmed Barrett's oesophagus neoplasia and non-dysplastic Barrett's oesophagus from 15 centres. Neoplasia was delineated by 14 Barrett's oesophagus experts for all datasets. We tested the performance of the CADe system on two independent test sets. The all-comers test set comprised 327 non-dysplastic Barrett's oesophagus images (73 patients), 82 neoplastic images (46 patients), 180 non-dysplastic Barrett's oesophagus videos (66 of the same patients), and 71 neoplastic videos (45 of the same patients). The benchmarking test set comprised 100 neoplastic images (50 patients), 300 non-dysplastic images (125 patients), 47 neoplastic videos (47 of the same patients), and 141 non-dysplastic videos (82 of the same patients), and was enriched with subtle neoplasia cases. The benchmarking test set was evaluated by 112 endoscopists from six countries (first without CADe and, after 6 weeks, with CADe) and by 28 external international Barrett's oesophagus experts. The primary outcome was the sensitivity of Barrett's neoplasia detection by general endoscopists without CADe assistance versus with CADe assistance on the benchmarking test set. We compared sensitivity using a mixed-effects logistic regression model with conditional odds ratios (ORs; likelihood profile 95% CIs).
FINDINGS: Sensitivity for neoplasia detection among endoscopists increased with CADe assistance from 74% to 88% (OR 2·04; 95% CI 1·73-2·42; p<0·0001) for images and from 67% to 79% (2·35; 1·90-2·94; p<0·0001) for video, without compromising specificity (from 89% to 90% [1·07; 0·96-1·19; p=0·20] for images and from 96% to 94% [0·94; 0·79-1·11; p=0·46] for video). In the all-comers test set, CADe detected neoplastic lesions in 95% (88-98) of images and 97% (90-99) of videos. In the benchmarking test set, the CADe system was superior to endoscopists in detecting neoplasia (90% vs 74% [OR 3·75; 95% CI 1·93-8·05; p=0·0002] for images and 91% vs 67% [11·68; 3·85-47·53; p<0·0001] for video) and non-inferior to Barrett's oesophagus experts (90% vs 87% [OR 1·74; 95% CI 0·83-3·65] for images and 91% vs 86% [2·94; 0·99-11·40] for video).
INTERPRETATION: CADe outperformed endoscopists in detecting Barrett's oesophagus neoplasia and, when used as an assistive tool, improved their detection rate. CADe detected virtually all neoplasia in a test set of consecutive cases.
FUNDING: Olympus.
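As an illustrative aside on the primary outcome, an unadjusted sensitivity and odds-ratio calculation from assessment counts is sketched below; the study itself used a mixed-effects logistic regression with conditional ORs, and the counts here are placeholders.

    # Unadjusted sketch of the primary outcome: sensitivity with vs without CADe
    # assistance and the corresponding odds ratio. The study used a mixed-effects
    # logistic regression (conditional ORs); the counts below are placeholders.
    def sensitivity(detected, missed):
        return detected / (detected + missed)

    without_cade = {"detected": 74, "missed": 26}   # placeholder assessments
    with_cade = {"detected": 88, "missed": 12}

    sens_without = sensitivity(**without_cade)
    sens_with = sensitivity(**with_cade)
    odds_ratio = (with_cade["detected"] / with_cade["missed"]) / (
        without_cade["detected"] / without_cade["missed"]
    )
    print(sens_without, sens_with, round(odds_ratio, 2))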