
    Word-level Text Highlighting of Medical Texts for Telehealth Services

    The medical domain is often subject to information overload. The digitization of healthcare, constant updates to online medical repositories, and the increasing availability of biomedical datasets make it challenging to analyze the data effectively. This creates additional work for medical professionals, who depend heavily on medical data to complete their research and consult their patients. This paper shows how different text highlighting techniques can capture relevant medical context. This would reduce doctors' cognitive load and response time to patients by helping them make decisions faster, thus improving the overall quality of online medical services. Three word-level text highlighting methodologies are implemented and evaluated. The first method uses TF-IDF scores directly to highlight important parts of the text. The second method combines TF-IDF scores with the application of Local Interpretable Model-Agnostic Explanations (LIME) to classification models. The third method uses neural networks directly to predict whether or not a word should be highlighted. The results of our experiments show that the neural network approach successfully highlights medically relevant terms, and its performance improves as the size of the input segment increases. Comment: 33 pages, 7 figures, 2 tables.
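    The first method described above can be sketched as a toy TF-IDF highlighter. The corpus, threshold, and whitespace tokenization below are illustrative assumptions, not the paper's actual pipeline.

```python
# Toy sketch of TF-IDF-based word highlighting: score each word by TF-IDF
# within its document and highlight words above a threshold.
# Corpus, threshold, and tokenization are illustrative assumptions.
import math
from collections import Counter

corpus = [
    "patient presents acute chest pain and shortness of breath".split(),
    "routine follow up visit for blood pressure medication refill".split(),
]

# document frequency: number of documents each word appears in
df = Counter(w for doc in corpus for w in set(doc))

def tfidf(word, doc):
    tf = doc.count(word) / len(doc)
    idf = math.log(len(corpus) / df[word])
    return tf * idf

def highlight(doc, threshold=0.05):
    """Return the words this method would mark as relevant."""
    return [w for w in doc if tfidf(w, doc) > threshold]
```

    In practice the threshold would be tuned against annotated highlights rather than fixed by hand.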

    Improved α-GAN architecture for generating 3D connected volumes with an application to radiosurgery treatment planning

    Generative Adversarial Networks (GANs) have gained significant attention in several computer vision tasks for generating high-quality synthetic data. Various medical applications, including diagnostic imaging and radiation therapy, can benefit greatly from synthetic data generation due to data scarcity in the domain. However, medical image data is typically kept in 3D space, and generative models suffer from the curse of dimensionality when generating such synthetic data. In this paper, we investigate the potential of GANs for generating connected 3D volumes. We propose an improved version of 3D α-GAN that incorporates various architectural enhancements. On a synthetic dataset of connected 3D spheres and ellipsoids, our model can generate fully connected 3D shapes with geometrical characteristics similar to those of the training data. We also show that our 3D GAN model can successfully generate high-quality 3D tumor volumes and associated treatment specifications (e.g., isocenter locations). Moment invariants similar to those of the training data, as well as fully connected 3D shapes, confirm that the improved 3D α-GAN implicitly learns the training data distribution and generates realistic-looking samples. These capabilities make the improved 3D α-GAN a valuable source of synthetic medical image data that can help future research in this domain.
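    The "fully connected" property evaluated above can be checked mechanically. The helper below is our own sketch, assuming 6-connectivity over a binary voxel volume; it is not the authors' evaluation code.

```python
# Check whether the occupied voxels of a binary 3D volume form a single
# 6-connected component, via breadth-first search. Illustrative sketch of
# the connectivity criterion, not the paper's implementation.
from collections import deque

def is_connected(volume):
    """volume: 3D nested list of 0/1 voxels. True if occupied voxels form one component."""
    voxels = {(x, y, z)
              for x, plane in enumerate(volume)
              for y, row in enumerate(plane)
              for z, v in enumerate(row) if v}
    if not voxels:
        return False
    seen = {next(iter(voxels))}
    queue = deque(seen)
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nb = (x + dx, y + dy, z + dz)
            if nb in voxels and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return seen == voxels
```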

    Neural Approximate Dynamic Programming for the Ultra-fast Order Dispatching Problem

    Same-Day Delivery (SDD) services aim to maximize the fulfillment of online orders while minimizing delivery delays but are beset by operational uncertainties such as those in order volumes and courier planning. Our work aims to enhance the operational efficiency of SDD by focusing on the ultra-fast Order Dispatching Problem (ODP), which involves matching and dispatching orders to couriers within a centralized warehouse setting, and completing the delivery within a strict timeline (e.g., within minutes). We introduce important extensions to ultra-fast ODP, such as order batching and explicit courier assignments, to provide a more realistic representation of dispatching operations and improve delivery efficiency. As a solution method, we primarily focus on NeurADP, a methodology that combines Approximate Dynamic Programming (ADP) and Deep Reinforcement Learning (DRL), and our work constitutes the first application of NeurADP outside of the ride-pool matching problem. NeurADP is particularly suitable for ultra-fast ODP as it addresses complex one-to-many matching and routing intricacies through a neural network-based value function approximation (VFA) that captures high-dimensional problem dynamics without requiring manual feature engineering as in generic ADP methods. We test our proposed approach using four distinct realistic datasets tailored for ODP and compare the performance of NeurADP against myopic and DRL baselines, also making use of non-trivial bounds to assess the quality of the policies. Our numerical results indicate that the inclusion of order batching and courier queues enhances the efficiency of delivery operations and that NeurADP significantly outperforms other methods. Detailed sensitivity analysis with important parameters confirms the robustness of NeurADP under different scenarios, including variations in courier numbers, spatial setup, vehicle capacity, and permitted delay time.
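    A toy version of the myopic dispatching baseline the paper compares against might look as follows; the data shapes, unit service time, and earliest-deadline ordering are our illustrative assumptions.

```python
# Myopic dispatching sketch: greedily assign each order (earliest deadline
# first) to the courier who becomes available soonest, skipping orders that
# would miss their deadline. Illustrative baseline, not the paper's code.
def myopic_dispatch(orders, couriers):
    """orders: list of (order_id, deadline); couriers: dict id -> ready_time.
    Returns assignments {order_id: courier_id}; infeasible orders are left out."""
    assignments = {}
    for order_id, deadline in sorted(orders, key=lambda o: o[1]):
        cid = min(couriers, key=couriers.get)   # soonest-available courier
        finish = couriers[cid] + 1              # assume unit service time per order
        if finish <= deadline:
            assignments[order_id] = cid
            couriers[cid] = finish
        # otherwise the order misses its deadline and stays unassigned
    return assignments
```

    A NeurADP-style policy would instead score each feasible order-courier match with a learned value function, trading off immediate reward against future dispatching flexibility.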

    A Prompt-based Few-shot Learning Approach to Software Conflict Detection

    A software requirement specification (SRS) document is an essential part of the software development life cycle, outlining the requirements that a software program in development must satisfy. This document is often specified by a diverse group of stakeholders and is subject to continual change, making the process of maintaining the document and detecting conflicts between requirements an essential task in software development. Notably, projects that do not address conflicts in the SRS document early on face considerable problems later in the development life cycle. These problems incur substantial costs in terms of time and money, and these costs often become insurmountable barriers that ultimately result in the termination of a software project altogether. As a result, early detection of SRS conflicts is critical to project sustainability. The conflict detection task is approached in numerous ways, many of which require a significant amount of manual intervention from developers or access to a large amount of labeled, task-specific training data. In this work, we propose using a prompt-based learning approach to perform few-shot learning for conflict detection. We compare our results to supervised learning approaches that use pretrained language models, such as BERT and its variants. Our results show that prompting with just 32 labeled examples can achieve a level of performance in many key metrics similar to that of supervised learning on training sets that are orders of magnitude larger. In contrast to many other conflict detection approaches, we make no assumptions about the type of underlying requirements, allowing us to analyze pairings of both functional and non-functional requirements. This allows us to omit the potentially expensive task of filtering out non-functional requirements from our dataset. Comment: 9 pages, 4 figures. To be published in Proceedings of the 32nd Annual International Conference on Computer Science and Software Engineering (CASCON '22).
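    A prompt-based setup of the kind described above can be sketched as a cloze template plus a verbalizer; the template wording and label words below are our illustrative choices, not necessarily the paper's.

```python
# Sketch of prompt-based conflict detection: wrap a requirement pair in a
# cloze template and let a masked language model's preference between label
# words at the [MASK] position decide the class. Template and verbalizer
# are illustrative assumptions.
TEMPLATE = 'Requirement A: "{a}" Requirement B: "{b}" These requirements are [MASK].'
VERBALIZER = {"conflicting": "Conflict", "compatible": "Neutral"}

def build_prompt(req_a, req_b):
    return TEMPLATE.format(a=req_a, b=req_b)

def classify(mask_word_scores):
    """mask_word_scores: dict word -> model score at the [MASK] position
    (here supplied directly; in practice produced by a masked language model)."""
    best = max(VERBALIZER, key=lambda w: mask_word_scores.get(w, float("-inf")))
    return VERBALIZER[best]
```

    Few-shot training then only tunes the model to prefer the right label word, which is why a handful of examples can suffice.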

    Linear programming-based solution methods for constrained partially observable Markov decision processes

    Constrained partially observable Markov decision processes (CPOMDPs) have been used to model various real-world phenomena. However, they are notoriously difficult to solve to optimality, and there exist only a few approximation methods for obtaining high-quality solutions. In this study, grid-based approximations are used in combination with linear programming (LP) models to generate approximate policies for CPOMDPs. A detailed numerical study is conducted with six CPOMDP problem instances considering both their finite and infinite horizon formulations. The quality of approximation algorithms for solving unconstrained POMDP problems is established through a comparative analysis with exact solution methods. Then, the performance of the LP-based CPOMDP solution approaches for varying budget levels is evaluated. Finally, the flexibility of LP-based approaches is demonstrated by applying deterministic policy constraints, and a detailed investigation into their impact on rewards and CPU run time is provided. For most of the finite horizon problems, deterministic policy constraints are found to have little impact on expected reward, but they introduce a significant increase in CPU run time. For infinite horizon problems, the reverse is observed: deterministic policies tend to yield lower expected total rewards than their stochastic counterparts, but the impact of deterministic constraints on CPU run time is negligible in this case. Overall, these results demonstrate that LP models can effectively generate approximate policies for both finite and infinite horizon problems while providing the flexibility to incorporate various additional constraints into the underlying model. Comment: 42 pages, 8 figures.
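    One ingredient of the grid-based approximation above can be illustrated concretely: beliefs (probability vectors over hidden states) are mapped onto a fixed finite grid, over which an LP can then be formulated. The regular grid and nearest-point rule below are a simplification of the interpolation schemes such methods actually use.

```python
# Illustrative belief-grid construction: enumerate all probability vectors
# over n_states whose entries are multiples of 1/resolution, and map an
# arbitrary belief to its nearest grid point. A simplification of the
# grid-based approximation described in the abstract.
from itertools import product

def belief_grid(n_states, resolution):
    """All probability vectors over n_states with entries i/resolution."""
    pts = [p for p in product(range(resolution + 1), repeat=n_states)
           if sum(p) == resolution]
    return [tuple(x / resolution for x in p) for p in pts]

def nearest_grid_point(belief, grid):
    """Map a belief to the closest grid point (squared Euclidean distance)."""
    return min(grid, key=lambda g: sum((b - x) ** 2 for b, x in zip(belief, g)))
```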

    Interpretable Time Series Clustering Using Local Explanations

    This study explores the use of local interpretability methods for explaining time series clustering models. Many state-of-the-art clustering models are not directly explainable. To provide explanations for these clustering algorithms, we train classification models to estimate the cluster labels. Then, we use interpretability methods to explain the decisions of the classification models. The explanations are used to obtain insights into the clustering models. We perform a detailed numerical study to test the proposed approach on multiple datasets, clustering models, and classification models. The analysis of the results shows that the proposed approach can be used to explain time series clustering models, specifically when the underlying classification model is accurate. Lastly, we provide a detailed analysis of the results, discussing how our approach can be used in a real-life scenario.
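    The surrogate pipeline described above can be sketched in miniature: cluster labels become classification targets, and explaining the classifier stands in for explaining the clustering. The 1D threshold clustering and single-split "explanation" below are deliberately simplistic stand-ins for the real clustering and interpretability methods.

```python
# Miniature surrogate-explanation pipeline: (1) cluster series by a simple
# statistic, (2) fit a decision stump to mimic the cluster labels,
# (3) read the learned split as the explanation. Illustrative only.
def cluster(series_means, threshold=0.5):
    """Toy clustering: assign each series to cluster 0 or 1 by its mean."""
    return [0 if m < threshold else 1 for m in series_means]

def fit_surrogate(series_means, labels):
    """Fit a decision stump (split point) that best mimics the cluster labels."""
    return min(series_means, key=lambda t: sum(
        (m >= t) != bool(l) for m, l in zip(series_means, labels)))

def explain(split_point):
    return f"series with mean >= {split_point:.2f} fall in cluster 1"
```

    As the abstract notes, this kind of explanation is only as trustworthy as the surrogate classifier's agreement with the clustering.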