7 research outputs found

    Automated Identification of Unhealthy Drinking Using Routinely Collected Data: A Machine Learning Approach

    Background: Unhealthy drinking is prevalent in the United States and can lead to serious health and social consequences, yet it is under-diagnosed and under-treated. Identifying unhealthy drinkers can be time-consuming for primary care providers. An automated tool for identification would allow attention to be focused on patients most likely to need care and therefore increase efficiency and effectiveness. Objectives: To build a clinical prediction tool for unhealthy drinking based solely on routinely collected demographic and laboratory data. Methods: We obtained demographic and laboratory data on 89,325 adults seen at the University of Vermont Medical Center from 2011 to 2017. Logistic regression, support vector machines (SVM), k-nearest neighbors, and random forests were each used to build clinical prediction models. The model with the largest area under the receiver operating characteristic curve (AUC) was selected. Results: SVM with polynomials of degree 3 produced the largest AUC. The most influential predictors were alkaline phosphatase, gender, glucose, and serum bicarbonate. The optimum operating point had sensitivity 31.1%, specificity 91.2%, positive predictive value 50.4%, and negative predictive value 82.1%. Application of the tool increased the prevalence of unhealthy drinking from 18.3% to 32.4%, while reducing the target population by 22%. Limitations: Universal screening was not used during the time the data were collected. The prevalence of unhealthy drinking among those screened was 60%, suggesting the AUDIT-C was administered to confirm rather than screen for unhealthy drinking. Conclusion: An automated tool, using commonly available data, can identify a subset of patients who appear to warrant clinical attention for unhealthy drinking.
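The model-selection workflow this abstract describes (fit several classifier families, keep the one with the largest AUC, then report operating-point metrics) can be sketched as follows. The study's clinical data are not public, so synthetic data stands in for the demographic and laboratory features; the 10%-quantile threshold below is our own illustrative choice, not the paper's operating point.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the routinely collected features, with a
# minority positive class roughly matching the 18% base prevalence.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.82],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# The four model families compared in the abstract.
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "svm_poly3": SVC(kernel="poly", degree=3, probability=True),
    "knn": KNeighborsClassifier(),
    "random_forest": RandomForestClassifier(random_state=0),
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
best = max(aucs, key=aucs.get)  # model with the largest AUC wins

# Operating-point metrics for the winning model; flagging only the
# top-scoring 10% mirrors the idea of shrinking the target population
# while raising prevalence among those flagged.
scores = models[best].predict_proba(X_te)[:, 1]
flagged = scores >= np.quantile(scores, 0.9)
tn, fp, fn, tp = confusion_matrix(y_te, flagged).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
```

A stricter threshold trades sensitivity for positive predictive value, which is the trade-off visible in the reported 31.1% sensitivity / 50.4% PPV operating point.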

    Adaptive Agents and Data Quality in Agent-Based Financial Markets

    We present our Agent-Based Market Microstructure Simulation (ABMMS), an Agent-Based Financial Market (ABFM) that captures much of the complexity present in the US National Market System for equities (NMS). Agent-based models are a natural choice for understanding financial markets. Financial markets feature a constrained action space that should simplify model creation and produce a wealth of data that should aid model validation, and a successful ABFM could strongly impact system design and policy development processes. Despite these advantages, ABFMs have largely remained an academic novelty. We hypothesize that two factors limit the usefulness of ABFMs. First, many ABFMs fail to capture relevant microstructure mechanisms, leading to differences in the mechanics of trading. Second, the simple agents that commonly populate ABFMs do not display the breadth of behaviors observed in human traders or the trading systems that they create. We investigate these issues through the development of ABMMS, which features a fragmented market structure, communication infrastructure with propagation delays, realistic auction mechanisms, and more. As a baseline, we populate ABMMS with simple trading agents and investigate properties of the generated data. We then compare the baseline with experimental conditions that explore the impacts of market topology or meta-reinforcement learning agents. The combination of detailed market mechanisms and adaptive agents leads to models whose generated data more accurately reproduce stylized facts observed in actual markets. These improvements increase the utility of ABFMs as tools to inform design and policy decisions.
    Comment: 11 pages, 6 figures, and 1 table. Contains 12 pages of supplemental information with 1 figure and 22 tables.
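ABMMS itself is not reproduced here, but the kind of simple-agent baseline the abstract mentions can be illustrated with a generic "zero-intelligence" ABFM sketch: random agents post limit orders into a single continuous double auction, and the simulation records the resulting trade-price series. All parameters (step count, quote spread, starting midpoint) are illustrative assumptions, not values from the paper.

```python
import heapq
import random


def simulate(n_steps=5000, seed=0):
    """Minimal zero-intelligence continuous double auction.

    Each step, one agent quotes a random price near the last trade and
    is randomly a buyer or a seller; crossing orders execute, the rest
    rest in the book. Returns the list of trade prices.
    """
    rng = random.Random(seed)
    bids = []   # max-heap via negated prices
    asks = []   # min-heap of prices
    mid = 100.0
    trades = []
    for _ in range(n_steps):
        price = mid + rng.uniform(-1.0, 1.0)
        if rng.random() < 0.5:                       # buyer
            if asks and price >= asks[0]:
                trades.append(heapq.heappop(asks))   # trade at best ask
            else:
                heapq.heappush(bids, -price)
        else:                                        # seller
            if bids and price <= -bids[0]:
                trades.append(-heapq.heappop(bids))  # trade at best bid
            else:
                heapq.heappush(asks, price)
        if trades:
            mid = trades[-1]
    return trades


trades = simulate()
```

The paper's argument is precisely that baselines like this one, lacking realistic microstructure and adaptive behavior, fail to reproduce many stylized facts, which motivates ABMMS's fragmented topology, propagation delays, and meta-reinforcement learning agents.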

    ARTS: Automotive Repository of Traffic Signs for the United States


    Cross-View Geo-Localization via Learning Disentangled Geometric Layout Correspondence

    Cross-view geo-localization aims to estimate the location of a query ground image by matching it against a database of geo-tagged reference aerial images. As an extremely challenging task, its difficulty is rooted in the drastic view changes and different capture times between the two views. Despite these difficulties, recent works achieve outstanding progress on cross-view geo-localization benchmarks. However, existing methods still suffer from poor performance on cross-area benchmarks, in which the training and testing data are captured from two different regions. We attribute this deficiency to models' inability to extract the spatial configuration of visual feature layouts and to their overfitting on low-level details from the training set. In this paper, we propose GeoDTR, which explicitly disentangles geometric information from raw features and learns the spatial correlations among visual features from aerial and ground pairs with a novel geometric layout extractor module. This module generates a set of geometric layout descriptors, modulating the raw features and producing high-quality latent representations. In addition, we elaborate on two categories of data augmentation: (i) layout simulation, which varies the spatial configuration while keeping the low-level details intact, and (ii) semantic augmentation, which alters the low-level details and encourages the model to capture spatial configurations. These augmentations help to improve the performance of cross-view geo-localization models, especially on cross-area benchmarks. Moreover, we propose a counterfactual-based learning process to benefit the geometric layout extractor in exploring spatial information. Extensive experiments show that GeoDTR not only achieves state-of-the-art results but also significantly boosts performance on same-area and cross-area benchmarks. Our code can be found at https://gitlab.com/vail-uvm/geodtr.
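The two augmentation families described above can be illustrated on a ground/aerial image pair. The abstract does not spell out GeoDTR's exact operations, so these are generic stand-ins: a horizontal flip applied consistently to both views (the layout changes while low-level details stay intact) and a per-channel color jitter (the details change while the layout stays intact).

```python
import numpy as np


def layout_simulation(ground, aerial):
    # Flip both views together so the cross-view geometry stays matched;
    # pixel values (low-level details) are untouched.
    return ground[:, ::-1].copy(), aerial[:, ::-1].copy()


def semantic_augmentation(img, rng):
    # Random per-channel gain/offset: alters low-level appearance while
    # leaving the spatial layout of the scene unchanged.
    gain = rng.uniform(0.8, 1.2, size=(1, 1, 3))
    offset = rng.uniform(-0.1, 0.1, size=(1, 1, 3))
    return np.clip(img * gain + offset, 0.0, 1.0)


rng = np.random.default_rng(0)
ground = rng.random((64, 128, 3))   # toy ground panorama, values in [0, 1]
aerial = rng.random((128, 128, 3))  # toy aerial patch
g_flip, a_flip = layout_simulation(ground, aerial)
g_jitter = semantic_augmentation(ground, rng)
```

The complementary design is the point: training against both families pushes the model toward the spatial configuration signal that transfers across areas, rather than region-specific low-level appearance.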

    Object Tracking and Geo-Localization from Street Images

    Object geo-localization from images is crucial to many applications such as land surveying, self-driving, and asset management. Current visual object geo-localization algorithms suffer from hardware limitations and impractical assumptions that limit their usability in real-world applications. Most current methods assume object sparsity and the presence of objects in at least two frames, and, most importantly, they support only a single class of objects. In this paper, we present a novel two-stage technique that detects and geo-localizes dense, multi-class objects such as traffic signs from street videos. Our algorithm is able to handle low-frame-rate inputs in which objects might be missing in one or more frames. We propose a detector that not only detects objects in images but also predicts a positional offset for each object relative to the camera GPS location. We also propose a novel tracker algorithm that is able to track a large number of multi-class objects. Many current geo-localization datasets require specialized hardware, suffer from idealized assumptions not representative of reality, and are often not publicly available. In this paper, we propose a public dataset called ARTSv2, an extension of the ARTS dataset that covers a diverse set of roads in widely varying environments to ensure it is representative of real-world scenarios. Our dataset will both support future research and provide a crucial benchmark for the field.
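The detector described above predicts a positional offset for each object relative to the camera's GPS fix. A minimal sketch of the final geo-localization step, turning a predicted east/north offset in meters into an object latitude/longitude, uses a local equirectangular approximation, which is accurate at street scale; the function name and example coordinates are our own, not from the paper.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius


def offset_to_latlon(cam_lat, cam_lon, east_m, north_m):
    """Add a small east/north offset (meters) to a camera lat/lon fix."""
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    # Longitude degrees shrink with cos(latitude) away from the equator.
    dlon = math.degrees(
        east_m / (EARTH_RADIUS_M * math.cos(math.radians(cam_lat)))
    )
    return cam_lat + dlat, cam_lon + dlon


# e.g. a sign detected 12 m east and 5 m north of a camera in Burlington, VT
lat, lon = offset_to_latlon(44.4759, -73.2121, 12.0, 5.0)
```

In a full pipeline, per-frame estimates like this would then be merged by the tracker across frames of the same object to produce one fused location.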