
    An adaptable fuzzy-based model for predicting link quality in robot networks.

    It is often essential for robots to maintain wireless connectivity with other systems so that commands, sensor data, and other situational information can be exchanged. Unfortunately, maintaining sufficient connection quality between these systems can be problematic. Robot mobility, combined with the attenuation and rapid dynamics associated with radio wave propagation, can cause frequent link quality (LQ) issues such as degraded throughput, temporary disconnects, or even link failure. To proactively mitigate such problems, robots must be able to gauge, at the application layer, the quality of their wireless connections. However, many existing approaches lack adaptability or the framework necessary to rapidly build and sustain an accurate LQ prediction model. The primary contribution of this dissertation is a novel way of blending machine learning with fuzzy logic to form an adaptable yet intuitive LQ prediction model. Another significant contribution is the evaluation of a unique active and incremental learning framework for quickly constructing and maintaining prediction models in robot networks with minimal sampling overhead.
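
    To make the idea concrete, below is a minimal sketch of fuzzy LQ inference in Python. Everything in it (the choice of RSSI and packet delivery ratio as inputs, the membership breakpoints, and the rule base) is an illustrative assumption, not the dissertation's model; the dissertation's point is precisely that such fuzzy sets and rules can be learned and adapted automatically rather than fixed by hand as they are here.

```python
# Illustrative sketch only: a tiny Takagi-Sugeno-style fuzzy estimator mapping
# RSSI (dBm) and packet delivery ratio (PDR) to a link-quality score in [0, 1].
# All breakpoints and rules are invented for demonstration.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def predict_link_quality(rssi_dbm, pdr):
    # Fuzzify inputs (breakpoints are assumptions, not measured values).
    rssi_weak   = tri(rssi_dbm, -100, -90, -75)
    rssi_strong = tri(rssi_dbm, -80, -60, -40)
    pdr_low     = tri(pdr, -0.01, 0.0, 0.6)
    pdr_high    = tri(pdr, 0.4, 1.0, 1.01)

    # Rule firing strengths (min as AND), each paired with an output level.
    rules = [
        (min(rssi_weak, pdr_low), 0.1),     # weak signal, lossy link -> poor
        (min(rssi_strong, pdr_high), 0.9),  # strong signal, reliable -> good
        (min(rssi_strong, pdr_low), 0.4),   # strong but lossy -> mediocre
        (min(rssi_weak, pdr_high), 0.5),    # weak but delivering -> uncertain
    ]
    num = sum(w * level for w, level in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5  # weighted-average defuzzification

print(predict_link_quality(rssi_dbm=-85, pdr=0.55))  # ~0.4 on these rules
```

    Replacing the hand-tuned breakpoints and rules with parameters fitted from actively sampled link measurements is where the machine-learning component described above would come in.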

    Automatic methods for low-cost evaluation and position-aware models for neural information retrieval

    An information retrieval (IR) system assists people in consuming huge amounts of data, which makes both the evaluation and the construction of such systems important. However, two difficulties arise: the overwhelmingly large number of query-document pairs to judge, which makes IR evaluation a manually laborious task; and the complicated patterns to model, owing to the non-symmetric, heterogeneous relationships within a query-document pair, where different interaction patterns such as term dependency and proximity have been demonstrated to be useful yet are non-trivial for a single IR model to encode. In this thesis we attempt to address both difficulties, from the perspectives of IR evaluation and of the retrieval model respectively, by reducing the manual cost with automatic methods, by investigating the usage of crowdsourcing in collecting preference judgments, and by proposing novel neural retrieval models. In particular, to address the large number of query-document pairs in IR evaluation, a low-cost selective labeling method is proposed to pick out a small subset of representative documents for manual judgment, in favor of the follow-up prediction for the remaining query-document pairs; furthermore, a language-model based cascade measure framework is developed to evaluate novelty and diversity, utilizing the content of the labeled documents to mitigate incomplete labels. In addition, we attempt to make preference judgments practically usable by empirically investigating different properties of the judgments when collected via crowdsourcing, and by proposing a novel judgment mechanism that strikes a compromise between judgment quality and the number of judgments. Finally, to model different complicated patterns in a single retrieval model, and inspired by recent advances in deep learning, we develop novel neural IR models that incorporate patterns like term dependency, query proximity, density of relevance, and query coverage in a single model. We demonstrate their superior performance through evaluations on different datasets.
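
    As a rough illustration of the selective-labeling idea (not the thesis's actual algorithm), the sketch below clusters query-document feature vectors, requests a manual judgment only for the document nearest each cluster centroid, and propagates that judgment to the rest of the cluster. The function names, the clustering choice, and the judgment interface are all assumptions made for demonstration.

```python
# Hedged sketch: label a representative subset, predict labels for the rest.
import numpy as np
from sklearn.cluster import KMeans

def select_and_propagate(doc_vectors, budget, judge_fn):
    """doc_vectors: (n_docs, dim) array; budget: number of manual judgments;
    judge_fn: callable returning a relevance label for one document index."""
    km = KMeans(n_clusters=budget, n_init=10).fit(doc_vectors)
    labels = np.empty(len(doc_vectors))
    for c in range(budget):
        members = np.where(km.labels_ == c)[0]
        # Judge only the member closest to the cluster centroid ...
        rep = members[np.argmin(
            np.linalg.norm(doc_vectors[members] - km.cluster_centers_[c], axis=1))]
        # ... and reuse its label for every document in the cluster.
        labels[members] = judge_fn(rep)
    return labels

# Toy usage: 100 random "documents", 5 manual judgments, a stand-in judge.
rng = np.random.default_rng(0)
vecs = rng.normal(size=(100, 16))
print(select_and_propagate(vecs, budget=5, judge_fn=lambda i: i % 2))
```

    The thesis's method additionally predicts labels for the unjudged pairs rather than copying them outright; this sketch only shows why a small, well-chosen judged subset can cover a much larger pool.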

    Leveraging Large Language Models for Automated Dialogue Analysis

    Developing high-performing dialogue systems benefits from the automatic identification of undesirable behaviors in system responses. However, detecting such behaviors remains challenging, as it draws on a breadth of general knowledge and understanding of conversational practices. Although recent research has focused on building specialized classifiers for detecting specific dialogue behaviors, behavior coverage is still incomplete and there is a lack of testing on real-world human-bot interactions. This paper investigates the ability of a state-of-the-art large language model (LLM), ChatGPT-3.5, to perform dialogue behavior detection for nine categories in real human-bot dialogues. We aim to assess whether ChatGPT can match specialized models and approximate human performance, thereby reducing the cost of behavior detection tasks. Our findings reveal that neither specialized models nor ChatGPT has yet achieved satisfactory results for this task, falling short of human performance. Nevertheless, ChatGPT shows promising potential and often outperforms specialized detection models. We conclude with an in-depth examination of the prevalent shortcomings of ChatGPT, offering guidance for future research to enhance LLM capabilities. (Comment: Accepted to SIGDIAL 2023)
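
    For readers curious what such a setup looks like in practice, here is a hedged sketch of prompting a chat LLM to flag one behavior at a time. The behavior names, the prompt wording, and the use of the openai Python client are assumptions for illustration, not the paper's exact protocol.

```python
# Hedged sketch: ask a chat LLM whether a bot response exhibits a given
# undesirable behavior. Category names and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BEHAVIORS = ["self-contradiction", "ignoring the user", "factual error"]

def detect_behavior(context, response, behavior):
    prompt = (
        f"Dialogue context:\n{context}\n\n"
        f"System response:\n{response}\n\n"
        f"Does the system response exhibit the behavior '{behavior}'? "
        "Answer with exactly 'yes' or 'no'."
    )
    out = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep judgments as deterministic as possible
    )
    return out.choices[0].message.content.strip().lower().startswith("yes")

context = "User: I'm vegetarian.\nBot: Great, I'll suggest recipes."
response = "You should try this bacon cheeseburger recipe!"
print({b: detect_behavior(context, response, b) for b in BEHAVIORS})
```

    In an evaluation like the paper's, each yes/no output would then be scored against human annotations for the corresponding behavior category.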

    Bridging the Divide White Paper on Medication Abortion: Overview of Research & Policy in the United States

    Medication abortion (also called medical abortion) is a safe method of abortion available for the past 15 years in the US. The Bridging the Divide white paper summarizes the scientific evidence related to the current medication abortion process and potential changes to the process that could make it even safer and more accessible for patients, as well as policy considerations and directions for future research. In the fall of 2000, the US Food & Drug Administration (FDA) approved the drug Mifeprex© (generic: mifepristone) for use in medication abortions. That approval included requirements that affect both patients and providers and that are far more specific than typical requirements for prescription drugs. The package insert (also known as the product label) indicated procedures for mifepristone prescribers to follow, based on the regimen used during the drug’s pre-approval clinical trials. FDA has not approved any other abortion drugs besides Mifeprex. Fifteen years later, in March 2016, the FDA approved an updated label for Mifeprex, marking an important step forward for access to abortion care and for evidence-based policy. Although the new label is progress toward policy that is informed and driven by scientific research, the change came many years after research data had demonstrated the safety and efficacy of widely used evidence-based protocols. In the intervening years, some states took advantage of the outdated requirements in the product label and implemented restrictive policy measures that prevented their residents from accessing care based on the latest evidence and best practice. The Bridging the Divide white paper on the current state of medication abortion evidence and policy can be found below, along with a shorter summary document for policy-makers and a recently published commentary from the journal Women’s Health Issues

    Seven Pillars of a New Evidentiary Paradigm: The Food, Drug, and Cosmetic Act Enters the Genomic Era

    To assess the impact of the March 2009 decision in Wyeth v. Levine, it is crucial to understand that the Supreme Court ruled on actions that the U.S. Food and Drug Administration (FDA) took under a statutory scheme that already had been amended by the time the case was decided. The Food and Drug Administration Amendments Act of 2007 (FDAAA) transformed drug regulation, adding significant new powers to develop evidence and make new types of decisions in the postmarket period. This article explores how the contours of drug regulation are likely to change after FDAAA, which is the most profound reworking of the U.S. drug regulatory framework in half a century. FDAAA envisions heavy use, during the period after drugs are approved, of evidence from large observational studies that rely on interoperable health data networks. Understanding what was wrong with FDA's old evidentiary paradigm, which dates back to 1962, is essential to understanding its new one. Parts II and III of this article discuss the evidentiary limitations of premarket drug trials; important aspects of modern legal doctrine rest on misconceptions about their evidentiary power. Part IV then explores how scientific advances flowing from the Human Genome Project over the past decade further undermined FDA's old evidentiary paradigm. FDAAA was Congress's response to these problems. Part V identifies seven pillars of the new evidentiary paradigm: seven novel propositions that reject foundational assumptions of twentieth-century drug regulation. Collapse of these assumptions sets off ripple effects in various doctrinal areas. Part VI provides two examples, with the aim of opening a scholarly debate about these and other impacts of FDA's new evidentiary paradigm.

    Risk-Taking and Rulemaking: Addressing Risk Compensation Behavior Through FDA Regulation of Prescription Drugs

    Despite widespread acclaim for their potential to reduce public health harms, technological advances in health and safety frequently raise the ominous specter of risk compensation behavior: the possibility that individuals protected by these technologies will increase their risk-taking on the belief that they are protected from harm. Risk compensation has been a rallying cry for opponents of new technologies such as the HPV vaccine, needle exchange programs for drug users, or prescription pills for the prevention of HIV infection. Although these concerns are frequently voiced in the language of morality and personal responsibility, it may be more productive to consider this phenomenon through the lens of behavioral science, with an emphasis on respecting individuals' behavioral preferences. This Article aims to present the theoretical basis for risk compensation behavior, to categorize different types of risk compensation effects, to enumerate ways in which the law may address these effects, and to illustrate an application of these legal strategies to FDA regulation of prescription drugs. Throughout, this Article reframes risk compensation behavior as a presumptively rational mechanism for value conversion, by which the protective value of a health or safety technology is transformed into another type of value that may better satisfy individual preferences. But where imperfect information or negative externalities lead to harm, there may be a role for a regulatory response.

    Medical Product Information Incentives and the Transparency Paradox

    Recent allegations that essential safety and efficacy information is often suppressed by medical product manufacturers or poorly evaluated by regulators have led to calls for greater information transparency. The public is justifiably concerned that its ability to conduct an informed risk-benefit assessment of drugs and medical devices is compromised. Several changes have already been made to federal regulatory law and medical research policy to mandate greater disclosure, and more changes are being considered. However, it is possible that these measures may backfire by enhancing significant tort-based economic disincentives for generating new information. In other words, greater disclosure requirements could, paradoxically, lead to less information production. The resulting shortfall could be extremely dangerous and have a detrimental effect on health care for years to come. This Article addresses the crisis on the horizon and proposes a unique solution that connects tort law disincentives to information production incentives. It explains why an economically rational company would be expected to respond to transparency with less information and proposes a tort liability limitation as a solution that will encourage a cost-internalizing company to increase information production. This Article also considers the impact of the FDA's recent position on preemption along with other regulatory enhancements and concludes that these are effective, but second-best, solutions.