
    Constructing Ontology-Based Cancer Treatment Decision Support System with Case-Based Reasoning

    Decision support is a probabilistic and quantitative method for modeling problems under ambiguity, and computer technology can be employed to provide clinical decision support and treatment recommendations. Natural language applications, however, lack formality and their interpretation is inconsistent, whereas ontologies can capture the intended meaning and specify modeling primitives. A Disease Ontology (DO) covering cancer's clinical stages and their corresponding information components is utilized to improve the reasoning ability of a decision support system (DSS). The proposed DSS uses Case-Based Reasoning (CBR) to consider disease manifestations and provides physicians with treatment solutions from similar previous cases for reference, and it supports natural language processing (NLP) queries. With the help of the ontology, the DSS achieved 84.63% accuracy in disease classification.
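    A minimal sketch of the case-retrieval step in a CBR-based DSS of this kind is given below, in Python. The case schema, similarity measure and weights are illustrative assumptions rather than the paper's actual ontology-derived features.

```python
# Minimal sketch of the case-based reasoning (CBR) retrieval step described above.
# The case schema, weights and similarity measure are assumptions for illustration;
# the paper's actual ontology-driven features are not specified here.
from dataclasses import dataclass, field

@dataclass
class Case:
    stage: str                                   # e.g. clinical stage from the Disease Ontology
    manifestations: set = field(default_factory=set)
    treatment: str = ""

def similarity(query: Case, case: Case, w_stage: float = 0.5) -> float:
    """Weighted mix of stage match and Jaccard overlap of manifestations."""
    stage_score = 1.0 if query.stage == case.stage else 0.0
    union = query.manifestations | case.manifestations
    jaccard = len(query.manifestations & case.manifestations) / len(union) if union else 0.0
    return w_stage * stage_score + (1.0 - w_stage) * jaccard

def retrieve(query: Case, case_base: list, k: int = 3) -> list:
    """Return the k most similar past cases and their treatments for reference."""
    ranked = sorted(case_base, key=lambda c: similarity(query, c), reverse=True)
    return [(c.treatment, round(similarity(query, c), 3)) for c in ranked[:k]]

if __name__ == "__main__":
    case_base = [
        Case("II", {"cough", "weight loss"}, "surgery + adjuvant chemotherapy"),
        Case("III", {"cough", "chest pain"}, "chemoradiotherapy"),
        Case("II", {"weight loss"}, "surgery"),
    ]
    query = Case("II", {"cough", "weight loss", "fatigue"})
    print(retrieve(query, case_base, k=2))
```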

    Confidence-and-Refinement Adaptation Model for Cross-Domain Semantic Segmentation

    With the rapid development of convolutional neural networks (CNNs), significant progress has been achieved in semantic segmentation. Despite this success, such deep learning approaches require large-scale real-world datasets with pixel-level annotations. Because pixel-level labeling of semantics is extremely laborious, many researchers turn to synthetic data with free annotations. Due to the clear domain gap, however, a segmentation model trained on synthetic images tends to perform poorly on real-world datasets. Unsupervised domain adaptation (UDA) for semantic segmentation, which aims at alleviating this domain discrepancy, has recently gained increasing research attention. Existing methods in this scope either simply align features or outputs across the source and target domains, or have to deal with complex image processing and post-processing problems. In this work, we propose a novel multi-level UDA model named the Confidence-and-Refinement Adaptation Model (CRAM), which contains a confidence-aware entropy alignment (CEA) module and a style feature alignment (SFA) module. Through CEA, the adaptation is done locally via adversarial learning in the output space, making the segmentation model pay attention to high-confidence predictions. Furthermore, to enhance model transfer in the shallow feature space, the SFA module is applied to minimize the appearance gap across domains. Experiments on two challenging UDA benchmarks, "GTA5-to-Cityscapes" and "SYNTHIA-to-Cityscapes", demonstrate the effectiveness of CRAM. We achieve performance comparable with existing state-of-the-art works, with advantages in simplicity and convergence speed.
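    The sketch below illustrates the kind of confidence-weighted output-space entropy signal that a CEA-style module could use; the weighting scheme is an illustrative assumption, and the paper's adversarial discriminator and refinement steps are not reproduced.

```python
# A minimal sketch of confidence-weighted entropy maps in the output space,
# in the spirit of the CEA module described above. The weighting scheme is an
# illustrative assumption, not the paper's exact design.
import torch
import torch.nn.functional as F

def entropy_map(logits: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Per-pixel normalised entropy of the segmentation softmax, shape (B, H, W)."""
    p = F.softmax(logits, dim=1)                       # (B, C, H, W)
    ent = -(p * torch.log(p + eps)).sum(dim=1)         # (B, H, W)
    return ent / torch.log(torch.tensor(float(logits.shape[1])))

def confidence_weighted_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Down-weight uncertain pixels so adaptation focuses on confident predictions."""
    ent = entropy_map(logits)
    confidence = F.softmax(logits, dim=1).max(dim=1).values   # (B, H, W)
    return (confidence * ent).mean()

if __name__ == "__main__":
    target_logits = torch.randn(2, 19, 64, 128)        # e.g. 19 Cityscapes classes
    loss = confidence_weighted_entropy(target_logits)
    print(loss.item())
```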

    You Never Cluster Alone

    Recent advances in self-supervised learning with instance-level contrastive objectives facilitate unsupervised clustering. However, a standalone datum does not perceive the context of the holistic cluster and may undergo sub-optimal assignment. In this paper, we extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data assigned to the same cluster contribute to a unified representation that encodes the context of each data group. Contrastive learning with this representation then rewards the assignment of each datum. To implement this vision, we propose twin-contrast clustering (TCC). We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one. On one hand, with the corresponding assignment variables as weights, a weighted aggregation over the data points implements the set representation of a cluster; we further propose heuristic cluster augmentation equivalents to enable cluster-level contrastive learning. On the other hand, we derive the evidence lower bound of the instance-level contrastive objective with the assignments. By reparameterizing the assignment variables, TCC is trained end-to-end, requiring no alternating steps. Extensive experiments show that TCC outperforms the state-of-the-art on benchmark datasets.
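    The following sketch illustrates the cluster-level representation and contrastive step described above, assuming soft assignments weight an aggregation over instance embeddings and an InfoNCE-style loss contrasts two augmented views; the temperature and exact loss form are illustrative assumptions.

```python
# A minimal sketch of cluster-level contrastive learning: soft assignments weight
# an aggregation over instance embeddings to form one vector per cluster, and the
# cluster representations of two augmented views are contrasted. Temperature and
# the InfoNCE form below are illustrative assumptions.
import torch
import torch.nn.functional as F

def cluster_representations(z: torch.Tensor, assign: torch.Tensor) -> torch.Tensor:
    """z: (N, D) instance embeddings; assign: (N, K) soft cluster assignments.
    Returns (K, D) assignment-weighted cluster representations."""
    weights = assign / (assign.sum(dim=0, keepdim=True) + 1e-8)   # normalise per cluster
    return weights.t() @ z                                        # (K, D)

def cluster_contrastive_loss(c1: torch.Tensor, c2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE between the two views' cluster representations (positives on the diagonal)."""
    c1, c2 = F.normalize(c1, dim=1), F.normalize(c2, dim=1)
    logits = c1 @ c2.t() / tau                                    # (K, K)
    targets = torch.arange(c1.shape[0])
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    z1, z2 = torch.randn(256, 128), torch.randn(256, 128)         # two augmented views
    assign = F.softmax(torch.randn(256, 10), dim=1)               # soft assignments, K = 10
    loss = cluster_contrastive_loss(cluster_representations(z1, assign),
                                    cluster_representations(z2, assign))
    print(loss.item())
```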

    Privileged Anatomical and Protocol Discrimination in Trackerless 3D Ultrasound Reconstruction

    Three-dimensional (3D) freehand ultrasound (US) reconstruction without any additional external tracking device has seen recent advances with deep neural networks (DNNs). In this paper, we first investigate two contributing factors of the learned inter-frame correlation that enable DNN-based reconstruction: anatomy and protocol. We propose to incorporate the ability to represent these two factors, which are readily available during training, as privileged information to improve existing DNN-based methods. This is implemented in a new multi-task method, where anatomical and protocol discrimination serve as auxiliary tasks. We further develop a differentiable network architecture to optimise the branching location of these auxiliary tasks, which controls the ratio between shared and task-specific network parameters, so as to maximise the benefit from the two auxiliary tasks. Experimental results on a dataset of 38 forearms from 19 volunteers, acquired with 6 different scanning protocols, show that 1) both anatomical and protocol variances are enabling factors for DNN-based US reconstruction; and 2) learning to discriminate different subjects (anatomical variance) and predefined types of scanning paths (protocol variance) both significantly improve frame prediction accuracy, volume reconstruction overlap, accumulated tracking error and final drift, using the proposed algorithm.
    Comment: Accepted to the Advances in Simplifying Medical UltraSound (ASMUS) workshop at MICCAI 202
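    A minimal multi-task sketch in the spirit of this approach is shown below: a shared encoder feeds a transformation-regression head plus two auxiliary classification heads for subject (anatomy) and scanning protocol. The backbone, head sizes and loss weights are assumptions, and the paper's learned branching location is omitted.

```python
# Illustrative multi-task setup: shared encoder over a frame pair, a main
# transformation-regression head, and two auxiliary discrimination heads
# (subject/anatomy and protocol). Architecture and loss weights are assumptions.
import torch
import torch.nn as nn

class MultiTaskReconNet(nn.Module):
    def __init__(self, n_subjects: int = 19, n_protocols: int = 6, dof: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared features over a 2-frame stack
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.transform_head = nn.Linear(32, dof)         # main task: inter-frame transform
        self.subject_head = nn.Linear(32, n_subjects)    # auxiliary: anatomical discrimination
        self.protocol_head = nn.Linear(32, n_protocols)  # auxiliary: protocol discrimination

    def forward(self, frame_pair):
        h = self.encoder(frame_pair)
        return self.transform_head(h), self.subject_head(h), self.protocol_head(h)

if __name__ == "__main__":
    net = MultiTaskReconNet()
    frames = torch.randn(4, 2, 128, 128)              # batch of 2-frame stacks
    t_true = torch.randn(4, 6)                        # dummy ground-truth transforms
    subj = torch.randint(0, 19, (4,))
    proto = torch.randint(0, 6, (4,))
    t_pred, s_logits, p_logits = net(frames)
    loss = (nn.functional.mse_loss(t_pred, t_true)    # main task
            + 0.1 * nn.functional.cross_entropy(s_logits, subj)    # auxiliary weights assumed
            + 0.1 * nn.functional.cross_entropy(p_logits, proto))
    print(float(loss))
```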

    Trackerless freehand ultrasound with sequence modelling and auxiliary transformation over past and future frames

    Three-dimensional (3D) freehand ultrasound (US) reconstruction without a tracker can be advantageous over its two-dimensional or tracked counterparts in many clinical applications. In this paper, we propose to estimate the 3D spatial transformation between US frames from both past and future 2D images, using feed-forward and recurrent neural networks (RNNs). With the temporally available frames, a multi-task learning algorithm is further proposed to utilise a large number of auxiliary transformation-predicting tasks between them. Using more than 40,000 US frames acquired from 228 scans on 38 forearms of 19 volunteers in a volunteer study, the hold-out test performance is quantified by frame prediction accuracy, volume reconstruction overlap, accumulated tracking error and final drift, based on ground truth from an optical tracker. The results show the importance of modelling the temporally and spatially correlated input frames as well as the output transformations, with further improvement owing to additional past and/or future frames. The best-performing model predicted transformations between moderately spaced frames, with an interval of less than ten frames at 20 frames per second (fps). Little benefit was observed from adding frames more than one second away from the predicted transformation, with or without LSTM-based RNNs. Interestingly, with the proposed approach, an explicit within-sequence loss that encourages consistency in composing transformations or minimises accumulated error may no longer be required. The implementation code and volunteer data will be made publicly available to ensure reproducibility and support further research.
    Comment: 10 pages, 4 figures, paper submitted to the IEEE International Symposium on Biomedical Imaging (ISBI)
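    Below is a minimal sketch of sequence modelling over past and future frames, assuming per-frame CNN features feed a bidirectional LSTM whose output at the central time step regresses a 6-DoF transformation; the dimensions and parameterisation are illustrative assumptions, and the auxiliary transformation-predicting tasks are omitted.

```python
# Illustrative sequence model: per-frame CNN features, a bidirectional LSTM over
# past and future frames, and a head regressing the transformation at the central
# time step. Sizes and the 6-DoF parameterisation are assumptions.
import torch
import torch.nn as nn

class SeqTransformPredictor(nn.Module):
    def __init__(self, feat_dim: int = 64, hidden: int = 128, dof: int = 6):
        super().__init__()
        self.frame_encoder = nn.Sequential(           # per-frame feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, dof)        # transform between the central frames

    def forward(self, frames):                        # frames: (B, T, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.frame_encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)                      # context from past and future frames
        return self.head(out[:, t // 2])              # prediction at the central time step

if __name__ == "__main__":
    model = SeqTransformPredictor()
    seq = torch.randn(2, 8, 1, 96, 96)                # 8-frame window per sample
    print(model(seq).shape)                           # torch.Size([2, 6])
```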

    ATM Transaction Status Feature Analysis and Anomaly Detection

    In this paper, addressing the problem of ATM transaction status analysis and anomaly detection, we analyse the transaction statistics of a bank's ATM application system, extract and analyse the characteristic parameters of ATM transaction status, and design an anomaly monitoring scheme that can promptly and accurately raise alarms for four abnormal situations: a steep change in transaction volume, an increased transaction failure rate, slow transaction processing, and excessive transaction response time. Firstly, the transaction data are partitioned to distinguish between working days, non-working days, transaction-volume troughs, and normal trading periods, to avoid interference between data from different time periods and to account for data discontinuity. The characteristics of the anomalous data are identified by K-Means clustering analysis. The data are then analysed with a back-propagation (B-P) neural network to obtain the rule by which ATM transaction status changes over time; according to this rule, the ATM transaction status is judged and abnormal situations are alarmed in time. Finally, the amount and variety of collected data are increased, and influencing factors such as ATM popularity, holidays, and transaction types are added to the model; existing transaction data from before and after the Spring Festival are used for verification, in order to obtain a more realistic monitoring and early-warning program. The transaction-status anomaly monitoring scheme designed in this paper can not only correctly judge but also promptly alarm failure scenarios of financial self-service equipment, so that the security of the financial self-service equipment trading system is guaranteed.
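    The sketch below illustrates the two-stage idea, assuming hourly status features: K-Means groups typical operating regimes, and a small back-propagation network (MLP) learns the normal pattern over time so that large residuals raise alarms. The features, thresholds and model sizes are illustrative assumptions; the paper's period splitting and Spring Festival data are not reproduced.

```python
# Illustrative two-stage pipeline: K-Means over transaction-status features to
# characterise operating regimes, plus a back-propagation network (MLP) that
# learns the normal volume pattern over time for residual-based alarms.
# Features, thresholds and sizes are assumptions, with synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hourly features: [transaction volume, failure rate, mean response time (s)]
hours = np.arange(24 * 14)                              # two weeks of hourly slots
volume = 200 + 150 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 10, hours.size)
failure_rate = np.clip(rng.normal(0.02, 0.005, hours.size), 0, 1)
response_time = np.clip(rng.normal(1.5, 0.2, hours.size), 0.1, None)
X = np.column_stack([volume, failure_rate, response_time])

# Stage 1: K-Means to characterise typical operating regimes.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Stage 2: BP network learns expected volume from time-of-day; alarm on large residuals.
t = (hours % 24).reshape(-1, 1)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(t, volume)
residual = np.abs(volume - mlp.predict(t))
alarm = residual > 3 * residual.std()                   # simple threshold rule
print("regime counts:", np.bincount(kmeans.labels_), "alarms:", int(alarm.sum()))
```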

    Enhanced Gene Transfection Efficacy and Safety Through Granular Hydrogel Mediated Gene Delivery Process

    Although gene therapy has made great achievements in both laboratory research and clinical translation, challenges remain, such as limited control of drug pharmacokinetics, acute toxicity, poor tissue retention, insufficient efficacy, and inconsistent clinical translation. Herein, a gene therapy gel is formulated by directly redispersing polyplex nanoparticles into granular hydrogels without any gelation pre-treatment, which provides great convenience for storage, dosing and administration. In vitro studies have shown that the use of granular hydrogels can regulate gene drug release, reduce dose-dependent toxicity and help improve transfection efficacy. Moreover, the developed gene therapy gel is easy to handle and can be used directly in vitro to evaluate its synergistic efficacy with various gene delivery systems. As such, it represents a major advance over many conventional excipient-based formulations and may inspire new regulatory strategies for gene therapy.