DeformableFormer: Classification of Endoscopic Ultrasound Guided Fine Needle Biopsy in Pancreatic Diseases
Endoscopic Ultrasound-Fine Needle Aspiration (EUS-FNA) is used to examine
pancreatic cancer. In EUS-FNA, a thin needle is inserted into the tumor under
EUS guidance to collect pancreatic tissue fragments. The collected fragments
are then stained to classify whether they are pancreatic cancer. However,
staining and visual inspection are time-consuming. In addition, if a tissue
fragment cannot be examined after staining, the collection must be repeated
on another day. Our purpose is therefore to classify from an unstained image
whether a fragment is usable for examination, and to exceed the accuracy of
visual classification by specialist physicians. Classifying images before
staining can reduce both the time required for staining and the burden on
patients. However, the images of pancreatic tissue fragments used in this
study cannot be classified successfully by processing the entire image,
because the fragments occupy only a small part of it. We therefore propose
DeformableFormer, which uses Deformable Convolution within the MetaFormer
framework. The architecture is a generalized form of the Vision Transformer,
and we use Deformable Convolution in the TokenMixer part. In contrast to
existing approaches, our DeformableFormer can perform feature extraction more
locally and dynamically through Deformable Convolution, making the extracted
features better suited to the classification target. To evaluate our method,
we classify pancreatic tissue fragments into two categories: usable and
unusable for examination. We demonstrate that our method outperforms both
specialist physicians and conventional methods in accuracy.
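The architectural idea in this abstract, a MetaFormer block whose token mixer is a deformable convolution, can be sketched concretely. Below is a minimal, illustrative PyTorch version, assuming torchvision.ops.DeformConv2d and channels-first feature maps; the layer names and block layout are our assumptions, not the authors' released code.

```python
# Illustrative sketch (assumed layout, not the authors' code): a MetaFormer
# block that uses a deformable convolution as its token mixer.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableTokenMixer(nn.Module):
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # A plain conv predicts 2 sampling offsets (x, y) per kernel tap.
        self.offset = nn.Conv2d(dim, 2 * kernel_size * kernel_size,
                                kernel_size, padding=pad)
        self.deform = DeformConv2d(dim, dim, kernel_size, padding=pad)

    def forward(self, x):                    # x: (B, C, H, W)
        return self.deform(x, self.offset(x))

class DeformableFormerBlock(nn.Module):
    """MetaFormer recipe: Norm -> TokenMixer -> residual; Norm -> MLP -> residual."""
    def __init__(self, dim, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.GroupNorm(1, dim)    # channels-first LayerNorm stand-in
        self.mixer = DeformableTokenMixer(dim)
        self.norm2 = nn.GroupNorm(1, dim)
        self.mlp = nn.Sequential(
            nn.Conv2d(dim, mlp_ratio * dim, 1), nn.GELU(),
            nn.Conv2d(mlp_ratio * dim, dim, 1))

    def forward(self, x):
        x = x + self.mixer(self.norm1(x))
        return x + self.mlp(self.norm2(x))

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    print(DeformableFormerBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```

Because the offsets are predicted from the input itself, the effective receptive field adapts per location, which is what lets the mixer focus on tissue fragments that occupy only a small part of the image.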
Improving Domain Generalization by Learning without Forgetting: Application in Retail Checkout
Designing an automatic checkout system for retail stores with human-level
accuracy is challenging because many products look alike and appear in
various poses. This paper addresses the problem with a two-stage pipeline.
The first stage detects class-agnostic items, and the second is dedicated to
classifying product categories. We also track objects across video frames to
avoid duplicate counting. One major challenge is the domain gap, because the
models are trained on synthetic data but tested on real images. To reduce
this gap, we adopt domain generalization methods for the first-stage
detector. In addition, a model ensemble is used to enhance the robustness of
the second-stage classifier. The method is evaluated on the AI City Challenge
2022, Track 4, where we report the F1 score on the test A set. Code is
released at https://github.com/cybercore-co-ltd/aicity22-track4
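The two-stage design described above is straightforward to express as code. The sketch below is purely illustrative: the detector, tracker, and classifier callables are hypothetical stand-ins, and only the ensemble averaging and track-ID de-duplication reflect the abstract directly.

```python
# Illustrative sketch of the two-stage checkout pipeline; all interfaces
# (detector, tracker, classifiers) are hypothetical stand-ins.
import torch

def classify_ensemble(crop, classifiers):
    # Model ensemble: average softmax probabilities across classifiers,
    # then pick the most likely product category.
    probs = torch.stack([c(crop).softmax(-1) for c in classifiers])
    return probs.mean(0).argmax(-1).item()

def count_products(frames, detector, tracker, classifiers):
    counted, counts = set(), {}
    for frame in frames:
        boxes = detector(frame)                # stage 1: class-agnostic boxes
        for track_id, crop in tracker(frame, boxes):
            if track_id in counted:            # same physical item: skip
                continue
            counted.add(track_id)
            label = classify_ensemble(crop, classifiers)   # stage 2
            counts[label] = counts.get(label, 0) + 1
    return counts
```

Classifying each tracked item only once, keyed by its track ID, is what prevents the duplicate counting the abstract mentions.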
Verifiable and Energy Efficient Medical Image Analysis with Quantised Self-attentive Deep Neural Networks
Convolutional Neural Networks have played a significant role in various
medical imaging tasks like classification and segmentation. They provide
state-of-the-art performance compared to classical image processing algorithms.
However, the major downsides of these methods are their high computational
complexity, their reliance on high-performance hardware such as GPUs, and the
inherent black-box nature of the model. In this paper, we propose quantised stand-alone
self-attention based models as an alternative to traditional CNNs. In the
proposed class of networks, convolutional layers are replaced with stand-alone
self-attention layers, and the network parameters are quantised after training.
We experimentally validate the performance of our method on classification and
segmentation tasks. We observe a reduction in model size, fewer parameters,
fewer FLOPs, and greater energy efficiency during inference on CPUs. The code
will be available at https://github.com/Rakshith2597/Quantised-Self-Attentive-Deep-Neural-Network
Comment: Accepted at the MICCAI 2022 FAIR Workshop
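The post-training quantisation step can be demonstrated with PyTorch's built-in dynamic quantisation. This is a generic sketch on a toy model, not the paper's exact scheme: since the projections inside self-attention are linear layers, quantize_dynamic converts their weights to int8 for CPU inference.

```python
# Generic post-training quantisation sketch (toy model, not the paper's
# network): linear-layer weights are converted to int8 after training.
import torch
import torch.nn as nn

model = nn.Sequential(               # stand-in for a self-attentive network,
    nn.Linear(64, 64),               # whose attention projections are Linears
    nn.ReLU(),
    nn.Linear(64, 10))
model.eval()                         # quantise only after training

quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 64)
print(quantised(x).shape)            # inference runs on int8 weights (CPU)
```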
- …