See Through the Fog: Curriculum Learning with Progressive Occlusion in Medical Imaging
In recent years, deep learning models have revolutionized medical image
interpretation, offering substantial improvements in diagnostic accuracy.
However, these models often struggle with challenging images where critical
features are partially or fully occluded, which is a common scenario in
clinical practice. In this paper, we propose a novel curriculum learning-based
approach to train deep learning models to handle occluded medical images
effectively. Our method progressively introduces occlusion, starting from
clear, unobstructed images and gradually moving to images with increasing
occlusion levels. This ordered learning process, akin to human learning, allows
the model to first grasp simple, discernible patterns and subsequently build
upon this knowledge to understand more complicated, occluded scenarios.
Furthermore, we present three novel occlusion synthesis methods, namely
Wasserstein Curriculum Learning (WCL), Information Adaptive Learning (IAL), and
Geodesic Curriculum Learning (GCL). Our extensive experiments on diverse
medical image datasets demonstrate substantial improvements in model robustness
and diagnostic accuracy over conventional training methodologies.
Comment: 20 pages, 3 figures, 1 table
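The progressive-occlusion curriculum described in the abstract can be sketched in a minimal form. This is an illustrative sketch, not the paper's method: it assumes occlusion is synthesized by masking a random square whose area fraction grows linearly across curriculum stages (the paper's actual WCL, IAL, and GCL synthesis methods are not specified here). The function names `occlude` and `curriculum_levels` are hypothetical.

```python
import numpy as np

def occlude(image, level, rng):
    """Mask a random square covering roughly `level` of the image area.
    `level` is a fraction in [0, 1]; 0 returns the image unchanged."""
    h, w = image.shape[:2]
    side = int(np.sqrt(level) * min(h, w))
    if side == 0:
        return image.copy()
    top = rng.integers(0, h - side + 1)
    left = rng.integers(0, w - side + 1)
    out = image.copy()
    out[top:top + side, left:left + side] = 0  # zero out the occluded patch
    return out

def curriculum_levels(n_stages, max_level=0.5):
    """Linearly increasing occlusion levels: clear images first,
    heaviest occlusion in the final stage."""
    return [max_level * s / (n_stages - 1) for s in range(n_stages)]

# Build one training batch per curriculum stage, easy to hard.
rng = np.random.default_rng(0)
img = np.ones((64, 64))
levels = curriculum_levels(5)  # [0.0, 0.125, 0.25, 0.375, 0.5]
batches = [occlude(img, lv, rng) for lv in levels]
```

A training loop would then visit `batches` in order, fine-tuning on each stage before advancing, so the model sees unobstructed images before heavily occluded ones.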
Transcending Grids: Point Clouds and Surface Representations Powering Neurological Processing
In healthcare, accurately classifying medical images is vital, but
conventional methods often hinge on medical data with a consistent grid
structure, which may restrict their overall performance. Recent medical
research has focused on tweaking model architectures to attain better
performance, without due consideration of how the data are represented. In
this paper, we present a novel approach for transforming grid-based data into
its higher dimensional representations, leveraging unstructured point cloud
data structures. We first generate a sparse point cloud from an image by
integrating pixel color information as spatial coordinates. Next, we construct
a hypersurface composed of points based on the image dimensions, with each
smooth section within this hypersurface symbolizing a specific pixel location.
Polygonal face construction is achieved using an adjacency tensor. Finally, a
dense point cloud is generated by densely sampling the constructed
hypersurface, with a focus on regions of higher detail. The effectiveness of
our approach is demonstrated on a publicly accessible brain tumor dataset,
achieving significant improvements over existing classification techniques.
This methodology allows the extraction of intricate details from the original
image, opening up new possibilities for advanced image analysis and processing
tasks.
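The first two steps of the pipeline above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact construction: each pixel is lifted to a 5-D point (row, column, R, G, B), treating color channels as extra spatial coordinates, and faces are built from simple grid adjacency rather than a full adjacency tensor; the dense resampling step is omitted. The names `image_to_point_cloud` and `grid_faces` are hypothetical.

```python
import numpy as np

def image_to_point_cloud(image):
    """Lift an H x W x 3 image into 5-D points (row, col, r, g, b),
    using pixel color channels as additional spatial coordinates."""
    h, w, _ = image.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows, cols], axis=-1).reshape(-1, 2)
    colors = image.reshape(-1, 3)
    return np.concatenate([coords, colors], axis=1).astype(np.float32)

def grid_faces(h, w):
    """Quad faces joining each pixel to its right and down neighbours,
    with vertex indices into the flattened (row-major) point cloud."""
    faces = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            faces.append((i, i + 1, i + w + 1, i + w))
    return np.array(faces)

img = np.random.default_rng(0).random((4, 4, 3))
cloud = image_to_point_cloud(img)  # 16 points, 5 coordinates each
faces = grid_faces(4, 4)           # 9 quad faces over the 4x4 grid
```

A dense point cloud would then be obtained by sampling extra points on each face, weighting the sampling toward faces with larger color variation.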