
    A Real-Time Remote IDS Testbed for Connected Vehicles

Connected vehicles are becoming commonplace. A constant connection between vehicles and a central server enables new features and services. This added connectivity increases exposure to attackers and the risk of unauthorized access. A possible countermeasure is an intrusion detection system (IDS), which aims to detect intrusions during or after their occurrence. The problem with IDS is the large variety of possible approaches and the lack of a sensible way to compare them. Our contribution is the conceptualization and implementation of a testbed for an automotive real-world scenario: a server-side IDS that detects intrusions into vehicles remotely. To verify the validity of our approach, we evaluate the testbed from multiple perspectives, including its fitness for purpose and the quality of the data it generates. Our evaluation shows that the testbed makes the effective assessment of various IDS possible. It solves multiple problems of existing approaches, including class imbalance, and it enables reproducibility and the generation of data of varying detection difficulty. This allows for the comprehensive evaluation of real-time, remote IDS.
Comment: Peer-reviewed version accepted for publication in the proceedings of the 34th ACM/SIGAPP Symposium On Applied Computing (SAC'19).
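As a minimal sketch of how output from such a testbed might be scored: the code below evaluates a detector against a labeled message stream, reporting precision, recall, and F1 rather than accuracy, since intrusion traffic is typically rare relative to benign traffic (the class imbalance the abstract mentions). The names (VehicleMessage, evaluate_ids) and the structure are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical evaluation harness for a server-side IDS; names and data
# layout are assumptions for illustration, not taken from the paper.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class VehicleMessage:
    vehicle_id: str
    payload: bytes
    is_intrusion: bool  # ground-truth label supplied by the testbed

def evaluate_ids(ids: Callable[[VehicleMessage], bool],
                 stream: Iterable[VehicleMessage]) -> dict:
    """Score an IDS with imbalance-aware metrics (precision/recall/F1)."""
    tp = fp = fn = tn = 0
    for msg in stream:
        flagged = ids(msg)
        if flagged and msg.is_intrusion:
            tp += 1
        elif flagged:
            fp += 1
        elif msg.is_intrusion:
            fn += 1
        else:
            tn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}
```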

Learning signals of adverse drug-drug interactions from the unstructured text of electronic health records

Drug-drug interactions (DDIs) account for 30% of all adverse drug reactions, which are the fourth leading cause of death in the US. Current methods for post-marketing surveillance primarily use spontaneous reporting systems to learn DDI signals and validate those signals using the structured portions of Electronic Health Records (EHRs). We demonstrate a fast, annotation-based approach that uses standard odds ratios to identify DDI signals directly from the textual portion of EHRs and which, to our knowledge, is the first effort of its kind. We developed a gold standard of 1,120 DDIs spanning 14 adverse events and 1,164 drugs. Our evaluations on this gold standard, using millions of clinical notes from the Stanford Hospital, confirm that identifying DDI signals from clinical text is feasible (AUROC = 81.5%). We conclude that the text in EHRs contains valuable information for learning DDI signals and has enormous utility in drug surveillance and clinical decision support.
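The abstract names standard odds ratios as the signal statistic. The sketch below shows that computation over a 2x2 contingency table; the table layout, the example counts, the 0.5 continuity correction, and the decision threshold are all assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative odds-ratio computation for DDI signal detection; counts,
# correction, and threshold below are assumptions, not the paper's values.
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """2x2 contingency table over patient records:
        a: on both drugs, adverse event mentioned
        b: on both drugs, no adverse event
        c: not on both drugs, adverse event mentioned
        d: not on both drugs, no adverse event
    A 0.5 (Haldane-Anscombe) correction guards against zero cells."""
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    return (a * d) / (b * c)

# A drug pair becomes a candidate DDI signal when its odds ratio clears
# a chosen threshold (both numbers here are purely illustrative).
if odds_ratio(30, 970, 50, 8950) > 2.0:
    print("candidate DDI signal")
```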

    Electronic fraud detection in the U.S. Medicaid Healthcare Program: lessons learned from other industries

It is estimated that between $600 and $850 billion annually is lost to fraud, waste, and abuse in the US healthcare system, with $125 to $175 billion of this due to fraudulent activity (Kelley 2009). Medicaid, a state-run, federally matched government program that accounts for roughly one-quarter of all healthcare expenses in the US, has been a particularly susceptible target for fraud in recent years. With escalating overall healthcare costs, payers, especially government-run programs, must seek savings throughout the system to maintain reasonable quality-of-care standards. As such, the need for effective fraud detection and prevention is critical. Electronic fraud detection systems are widely used in the insurance, telecommunications, and financial sectors. What lessons can be learned from these efforts and applied to improve fraud detection in the Medicaid healthcare program? In this paper, we conduct a systematic literature study to analyze the applicability of existing electronic fraud detection techniques from similar industries to the US Medicaid program.

    Active learning in annotating micro-blogs dealing with e-reputation

Elections unleash strong political views on Twitter, but what do people really think about politics? Opinion and trend mining on micro-blogs dealing with politics has recently attracted researchers in several fields, including Information Retrieval and Machine Learning (ML). Since the performance of ML and Natural Language Processing (NLP) approaches is limited by the amount and quality of available data, one promising alternative for some tasks is the automatic propagation of expert annotations. This paper develops an active learning process for automatically annotating French-language tweets that deal with the image (i.e., representation, web reputation) of politicians. Our main focus is the methodology followed to build an original annotated dataset expressing opinion about two French politicians over time. We therefore review state-of-the-art NLP-based ML algorithms for automatically annotating tweets, using a manual annotation step as bootstrap. The paper focuses on key issues in active learning while building a large annotated dataset from noisy data, where noise is introduced by human annotators, the abundance of data, and the label distribution across data and entities. In turn, we show that Twitter characteristics such as the author's name or hashtags can serve as bearing points not only to improve automatic systems for Opinion Mining (OM) and Topic Classification but also to reduce noise in human annotations. A subsequent thorough analysis shows, however, that reducing noise may induce the loss of crucial information.
Comment: Journal of Interdisciplinary Methodologies and Issues in Science - Vol 3 - Contextualisation digitale - 201
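A minimal sketch of the kind of active learning loop the abstract describes, with a manually annotated seed set as bootstrap and uncertainty sampling to pick the next tweets for annotation. The synthetic features, the classifier choice, the batch size, and the budget are all assumptions for illustration; in the real process a human annotator, not the ground-truth array, labels the queried items.

```python
# Hypothetical active-learning loop with uncertainty sampling; data,
# model, and budget are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                 # stand-in tweet features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in opinion labels

labeled = list(range(20))                       # manual bootstrap annotations
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(10):                             # annotation rounds (budget)
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])[:, 1]
    # Query the tweets the model is least certain about (closest to 0.5).
    picked = set(np.argsort(np.abs(proba - 0.5))[:10].tolist())
    # Here y plays the human annotator's role for the queried tweets.
    labeled.extend(pool[j] for j in picked)
    pool = [i for j, i in enumerate(pool) if j not in picked]
```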