1,192 research outputs found
Constructing Ontology-Based Cancer Treatment Decision Support System with Case-Based Reasoning
Decision support is a probabilistic, quantitative approach to modeling
problems under uncertainty, and computer technology can be employed to
provide clinical decision support and treatment recommendations.
Natural-language applications, however, lack formality and their
interpretation is inconsistent; ontologies, by contrast, capture the
intended meaning and specify modeling primitives. A Disease Ontology (DO)
covering cancer's clinical stages and their corresponding information
components is utilized to improve the reasoning ability of a decision
support system (DSS). The proposed DSS uses Case-Based Reasoning (CBR) to
consider disease manifestations and provides physicians with treatment
solutions from similar previous cases for reference, and it supports
natural language processing (NLP) queries. With the help of the ontology,
the DSS achieved 84.63% accuracy in disease classification.
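The retrieval step of such a CBR system can be sketched as ranking past cases by feature similarity to the query manifestations. The case base, feature encoding, and treatments below are hypothetical illustrations, not taken from the paper:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two numeric case-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_similar_cases(query, case_base, k=3):
    """Return the k past cases most similar to the query manifestation vector."""
    ranked = sorted(case_base,
                    key=lambda c: cosine_similarity(query, c["features"]),
                    reverse=True)
    return ranked[:k]

# Hypothetical case base: each case pairs binary-encoded disease
# manifestations with the treatment that was applied.
case_base = [
    {"features": [1, 0, 1, 1], "treatment": "chemotherapy"},
    {"features": [0, 1, 0, 1], "treatment": "surgery"},
    {"features": [1, 1, 1, 0], "treatment": "radiotherapy"},
]
matches = retrieve_similar_cases([1, 0, 1, 1], case_base, k=2)
```

The retrieved treatments would then be offered to the physician as reference solutions from similar previous cases.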
GDN: A Stacking Network Used for Skin Cancer Diagnosis
Skin cancer, the primary type of cancer that can be identified by visual
recognition, requires an automatic identification system that can accurately
classify different types of lesions. This paper presents GoogLe-Dense Network
(GDN), which is an image-classification model to identify two types of skin
cancer: Basal Cell Carcinoma and Melanoma. GDN stacks different networks to
enhance model performance. Specifically, GDN consists of two
sequential levels in its structure. The first level performs basic
classification tasks accomplished by GoogLeNet and DenseNet, which are trained
in parallel to enhance efficiency. To avoid low accuracy and long training
time, the second level takes the outputs of GoogLeNet and DenseNet as the
input to a logistic regression model. We compare our method with four
baseline networks, ResNet, VGGNet, DenseNet, and GoogLeNet, on the dataset,
where GoogLeNet and DenseNet significantly outperform ResNet and VGGNet. For
the second level, different stacking methods, including a perceptron,
logistic regression, SVM, decision trees, and K-nearest neighbors, are
studied, among which logistic regression gives the best predictions. The
results show that GDN achieves higher accuracy in skin cancer detection than
a single network structure.
Comment: Published at ICSPS 202
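The two-level stacking idea can be sketched as follows: the level-1 class probabilities from the two base networks become the features of a level-2 logistic regression meta-classifier. The probabilities, labels, and training loop below are illustrative stand-ins, not the paper's pipeline:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_meta_lr(stacked, labels, lr=0.5, epochs=500):
    """Fit a logistic regression meta-classifier on stacked base-model
    probabilities via simple stochastic gradient descent."""
    w = [0.0] * len(stacked[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(stacked, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical level-1 outputs: P(melanoma) from the two base networks.
p_googlenet = [0.9, 0.2, 0.8, 0.1]
p_densenet  = [0.8, 0.3, 0.7, 0.2]
labels      = [1, 0, 1, 0]  # 1 = melanoma, 0 = basal cell carcinoma

stacked = list(zip(p_googlenet, p_densenet))
w, b = train_meta_lr(stacked, labels)
pred = [int(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5)
        for x in stacked]
```

The meta-classifier learns how much to trust each base network, which is the essential benefit of stacking over averaging.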
Confidence-and-Refinement Adaptation Model for Cross-Domain Semantic Segmentation
With the rapid development of convolutional neural networks (CNNs), significant progress has been achieved in semantic segmentation. Despite this success, such deep learning approaches require large-scale real-world datasets with pixel-level annotations. Because pixel-level semantic labeling is extremely laborious, many researchers turn to synthetic data with free annotations. However, due to the clear domain gap, a segmentation model trained on synthetic images tends to perform poorly on real-world datasets. Unsupervised domain adaptation (UDA) for semantic segmentation, which aims at alleviating this domain discrepancy, has recently gained increasing research attention. Existing methods in this scope either simply align features or outputs across the source and target domains, or must deal with complex image-processing and post-processing problems. In this work, we propose a novel multi-level UDA model named the Confidence-and-Refinement Adaptation Model (CRAM), which contains a confidence-aware entropy alignment (CEA) module and a style feature alignment (SFA) module. Through CEA, the adaptation is done locally via adversarial learning in the output space, making the segmentation model pay attention to high-confidence predictions. Furthermore, to enhance model transfer in the shallow feature space, the SFA module is applied to minimize the appearance gap across domains. Experiments on two challenging UDA benchmarks, "GTA5-to-Cityscapes" and "SYNTHIA-to-Cityscapes", demonstrate the effectiveness of CRAM. We achieve performance comparable to existing state-of-the-art works, with advantages in simplicity and convergence speed.
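The confidence measure underlying entropy-based alignment is the normalised entropy of the per-pixel segmentation softmax, with low entropy marking high-confidence predictions. A minimal sketch of that quantity, using toy logits rather than CRAM's actual network output:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def normalised_entropy(logits):
    """Entropy of the predicted class distribution, scaled to [0, 1]
    by dividing by log(num_classes)."""
    p = softmax(logits)
    ent = -sum(pi * math.log(pi + 1e-12) for pi in p)
    return ent / math.log(len(p))

# Toy per-pixel logits from a 3-class segmentation head.
confident_pixel = normalised_entropy([9.0, 0.0, 0.0])  # peaked -> low entropy
uncertain_pixel = normalised_entropy([3.0, 3.0, 3.0])  # uniform -> entropy ~1
```

Thresholding this quantity yields a mask of high-confidence pixels on which adversarial alignment can be focused.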
You Never Cluster Alone
Recent advances in self-supervised learning with instance-level contrastive objectives facilitate unsupervised clustering. However, a standalone datum does not perceive the context of its holistic cluster and may undergo a sub-optimal assignment. In this paper, we extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data assigned to the same cluster contribute to a unified representation that encodes the context of each data group. Contrastive learning with this representation then rewards the assignment of each datum. To implement this vision, we propose twin-contrast clustering (TCC). We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one. On one hand, with the corresponding assignment variables as weights, a weighted aggregation over the data points implements the set representation of a cluster. We further propose heuristic cluster augmentation equivalents to enable cluster-level contrastive learning. On the other hand, we derive the evidence lower bound of the instance-level contrastive objective with the assignments. By reparametrizing the assignment variables, TCC is trained end-to-end, requiring no alternating steps. Extensive experiments show that TCC outperforms the state-of-the-art on benchmark datasets.
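The cluster-level representation described above is a weighted aggregation of instance embeddings, with each point weighted by its soft assignment confidence. A minimal sketch of that aggregation, with made-up embeddings and assignments rather than TCC's learned ones:

```python
def cluster_representations(embeddings, assignments, n_clusters):
    """Aggregate instance embeddings into one representation per cluster,
    weighting each point by its soft assignment confidence for that cluster."""
    dim = len(embeddings[0])
    reps = []
    for k in range(n_clusters):
        weights = [a[k] for a in assignments]
        total = sum(weights) or 1.0  # avoid division by zero for empty clusters
        rep = [sum(w * e[d] for w, e in zip(weights, embeddings)) / total
               for d in range(dim)]
        reps.append(rep)
    return reps

# Toy example: four 2-D embeddings with soft assignments over two clusters.
embeddings  = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
assignments = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
reps = cluster_representations(embeddings, assignments, n_clusters=2)
```

Each cluster representation is dominated by the points confidently assigned to it, so it encodes the context of its data group as the abstract describes.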
Privileged Anatomical and Protocol Discrimination in Trackerless 3D Ultrasound Reconstruction
Three-dimensional (3D) freehand ultrasound (US) reconstruction without using
any additional external tracking device has seen recent advances with deep
neural networks (DNNs). In this paper, we first investigate two contributing
factors of the learned inter-frame correlation that enable DNN-based
reconstruction: anatomy and protocol. We propose to incorporate the
ability to represent these two factors - readily available during training - as
the privileged information to improve existing DNN-based methods. This is
implemented in a new multi-task method, where the anatomical and protocol
discrimination are used as auxiliary tasks. We further develop a differentiable
network architecture to optimise the branching location of these auxiliary
tasks, which controls the ratio between shared and task-specific network
parameters, for maximising the benefits from the two auxiliary tasks.
Experimental results, on a dataset with 38 forearms of 19 volunteers acquired
with 6 different scanning protocols, show that 1) both anatomical and protocol
variances are enabling factors for DNN-based US reconstruction; 2) learning how
to discriminate different subjects (anatomical variance) and predefined types
of scanning paths (protocol variance) both significantly improve frame
prediction accuracy, volume reconstruction overlap, accumulated tracking error
and final drift, using the proposed algorithm.
Comment: Accepted to Advances in Simplifying Medical UltraSound (ASMUS)
workshop at MICCAI 202
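A multi-task objective of this kind can be sketched as the main transform loss plus weighted auxiliary cross-entropy terms for the anatomy and protocol discrimination tasks. The weights and logits below are illustrative assumptions, and the paper's differentiable branching-location optimisation is not reproduced:

```python
import math

def cross_entropy(logits, label):
    """Cross-entropy of a single prediction, computed stably via log-sum-exp."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_z - logits[label]

def total_loss(transform_loss, anat_logits, anat_label,
               prot_logits, prot_label, w_anat=0.1, w_prot=0.1):
    """Main transform-regression loss plus the two auxiliary discrimination
    losses; the auxiliary weights here are arbitrary illustrative values."""
    return (transform_loss
            + w_anat * cross_entropy(anat_logits, anat_label)
            + w_prot * cross_entropy(prot_logits, prot_label))

# Toy values: a reconstruction loss of 0.5, 3 subjects, 2 protocol types.
loss = total_loss(0.5, [2.0, 0.1, -1.0], 0, [0.3, 1.5], 1)
```

Because both auxiliary labels (subject identity and predefined scanning path) are readily available during training, they add supervision at no extra annotation cost, which is the "privileged information" idea.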
Trackerless freehand ultrasound with sequence modelling and auxiliary transformation over past and future frames
Three-dimensional (3D) freehand ultrasound (US) reconstruction without a
tracker can be advantageous over its two-dimensional or tracked counterparts in
many clinical applications. In this paper, we propose to estimate 3D spatial
transformation between US frames from both past and future 2D images, using
feed-forward and recurrent neural networks (RNNs). With the temporally
available frames, a further multi-task learning algorithm is proposed to
utilise a large number of auxiliary transformation-predicting tasks between
them. Using more than 40,000 US frames acquired from 228 scans on 38 forearms
of 19 volunteers in a volunteer study, the hold-out test performance is
quantified by frame prediction accuracy, volume reconstruction overlap,
accumulated tracking error and final drift, based on ground-truth from an
optical tracker. The results show the importance of modelling the
temporal-spatially correlated input frames as well as output transformations,
with further improvement owing to additional past and/or future frames. The
best performing model was associated with predicting transformation between
moderately-spaced frames, with an interval of less than ten frames at 20 frames
per second (fps). Little benefit was observed from adding frames more than one
second away from the predicted transformation, with or without LSTM-based RNNs.
Interestingly, with the proposed approach, explicit within-sequence loss that
encourages consistency in composing transformations or minimises accumulated
error may no longer be required. The implementation code and volunteer data
will be made publicly available to ensure reproducibility and further research.
Comment: 10 pages, 4 figures, submitted to the IEEE International Symposium
on Biomedical Imaging (ISBI)
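Accumulated tracking error and final drift arise because predicted frame-to-frame transforms are composed over the whole sweep, so small per-frame errors add up. A toy sketch of that composition with hand-made homogeneous transforms (not the paper's network outputs):

```python
def matmul4(a, b):
    """Multiply two 4x4 homogeneous transform matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def accumulate_poses(rel_transforms):
    """Compose predicted frame-to-frame transforms into absolute frame poses,
    as done when reconstructing a volume from a trackerless sweep."""
    identity = [[float(i == j) for j in range(4)] for i in range(4)]
    poses = [identity]
    for t in rel_transforms:
        poses.append(matmul4(poses[-1], t))
    return poses

def translation(t):
    """Extract the translation component of a homogeneous transform."""
    return [t[0][3], t[1][3], t[2][3]]

# Hypothetical predictions: each frame translated 1 mm along z from the last.
step = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 1.0],
        [0, 0, 0, 1]]
poses = accumulate_poses([step] * 10)
final_position = translation(poses[-1])
```

Final drift is then the distance between the composed final pose and the ground-truth pose from the optical tracker; any bias in the per-frame predictions grows linearly with sweep length.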
Self-Propelled Initiative Collision at Microelectrodes with Vertically Mobile Micromotors
Impact experiments enable single-particle analysis for many applications. However, the effect of a particle's trajectory towards an electrode on the impact signals still requires further exploration. Here, we investigate particle impact measurements as a function of motion, using micromotors with controllable vertical motion. With biocatalytic cascade reactions, the micromotor system utilizes buoyancy as the driving force, thus enabling more regulated interactions with the electrode. With the aid of numerical simulations, the dynamic interactions between the electrode and micromotors are categorized into four representative patterns: approaching, departing, approaching-and-departing, and departing-and-reapproaching, which correspond well with the experimentally observed impact signals. This study offers a way to explore the dynamic interactions between electrodes and particles, shedding light on the design of new electrochemical sensors.