
    Identifying unreliable sensors without a knowledge of the ground truth in deceptive environments

    This paper deals with the fascinating area of “fusing” the outputs of sensors without any knowledge of the ground truth. In an earlier paper, the present authors pioneered a solution by mapping the problem onto the intriguing paradox of trying to identify stochastic liars without any additional information about the truth. Although that work was significant, it was constrained by the model in which we live in a world where “the truth prevails over lying”. Couched in the terminology of Learning Automata (LA), this corresponds to the Environment (since the Environment is treated as an entity in its own right, we choose to capitalize it, rather than refer to it as an “environment”, i.e., as an abstract concept) being “Stochastically Informative”. However, as explained in the paper, solving the problem under the condition that the Environment is “Stochastically Deceptive”, as opposed to informative, is far from trivial. In this paper, we provide a solution to the problem where the Environment is deceptive (we are not aware of any other solution to this problem within this setting, and so we believe that our solution is both pioneering and novel), i.e., when we live in a world where “lying prevails over the truth”.
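    To give a feel for why the deceptive setting is so much harder, the toy simulation below (our own illustrative sketch, not the paper's model or algorithm) contrasts a “Stochastically Informative” world, where each sensor reports the truth with probability p > 0.5, with a “Stochastically Deceptive” one, where p < 0.5. The function names, sensor count, and probability values are all assumptions made for illustration.

```python
# Sketch: majority-vote fusion of binary sensors in an informative vs. a
# deceptive Environment. All names and parameters here are illustrative
# assumptions, not the paper's actual model.
import random

def sensor_reading(truth: int, p_truth: float, rng: random.Random) -> int:
    """Report the binary ground truth with probability p_truth, else lie."""
    return truth if rng.random() < p_truth else 1 - truth

def majority_fusion_accuracy(p_truth: float, n_sensors: int = 11,
                             trials: int = 10_000, seed: int = 0) -> float:
    """Fraction of trials in which a plain majority vote recovers the truth."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        truth = rng.randint(0, 1)
        ones = sum(sensor_reading(truth, p_truth, rng) for _ in range(n_sensors))
        decision = 1 if 2 * ones > n_sensors else 0
        correct += decision == truth
    return correct / trials

print("informative (p = 0.7):", majority_fusion_accuracy(0.7))  # close to 1.0
print("deceptive   (p = 0.3):", majority_fusion_accuracy(0.3))  # close to 0.0
```

    In the deceptive regime, naive fusion is not merely degraded but systematically inverted: the vote amplifies the lie rather than the truth, which is why the informative-world solution does not carry over.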

    Disability, ICT and eLearning Platforms: Faculty-Facing Embedded Work Tools in Learning Management Systems

    This paper contributes to the current discussion in the field of human-computer interaction (HCI) design on the accessibility and design of eLearning tools embedded in online platforms for higher education. Presenting the preliminary results of a longitudinal study of the accessibility of the faculty-facing pages of the Canvas learning management system, it aims to draw the attention of designers, developers, and manufacturers to the barriers erected by ableist LMS designs for disabled faculty. The paper calls for improvements in design processes through the adoption of participatory design methods and attention to the recommendations included in this paper.

    Impact of COVID-19 on cardiovascular testing in the United States versus the rest of the world

    Objectives: This study sought to quantify and compare the decline in volumes of cardiovascular procedures between the United States and non-US institutions during the early phase of the coronavirus disease-2019 (COVID-19) pandemic.
    Background: The COVID-19 pandemic has disrupted the care of many non-COVID-19 illnesses. Reductions in diagnostic cardiovascular testing around the world have led to concerns over the implications of reduced testing for cardiovascular disease (CVD) morbidity and mortality.
    Methods: Data were submitted to INCAPS-COVID (International Atomic Energy Agency Non-Invasive Cardiology Protocols Study of COVID-19), a multinational registry comprising 909 institutions in 108 countries (including 155 facilities in 40 U.S. states), assessing the impact of the COVID-19 pandemic on volumes of diagnostic cardiovascular procedures. Data were obtained for April 2020 and compared with baseline procedure volumes from March 2019. We compared laboratory characteristics, practices, and procedure volumes between U.S. and non-U.S. facilities and between U.S. geographic regions, and identified factors associated with volume reduction in the United States.
    Results: Reductions in procedure volumes in the United States were similar to those in non-U.S. facilities (68% vs. 63%, respectively; p = 0.237), although U.S. facilities reported greater reductions in invasive coronary angiography (69% vs. 53%, respectively; p < 0.001). Significantly more U.S. facilities than non-U.S. facilities reported increased use of telehealth and of patient screening measures such as temperature checks, symptom screenings, and COVID-19 testing. Reductions in procedure volumes differed between U.S. regions, with larger declines observed in the Northeast (76%) and Midwest (74%) than in the South (62%) and West (44%). In a multivariable analysis, prevalence of COVID-19, staff redeployments, outpatient centers, and urban centers were associated with greater volume reductions in U.S. facilities.
    Conclusions: We observed marked reductions in U.S. cardiovascular testing in the early phase of the pandemic, with significant variability between U.S. regions. The association between volume reductions and COVID-19 prevalence in the United States highlights the need for proactive efforts to maintain access to cardiovascular testing in areas most affected by COVID-19 outbreaks.

    On distinguishing between reliable and unreliable sensors without a knowledge of the ground truth

    In many applications, data from different sensors are aggregated in order to obtain more reliable information about the process that the sensors are monitoring. However, the quality of the aggregated information is intricately dependent on the reliability of the individual sensors. In fact, unreliable sensors will tend to report erroneous values of the ground truth, and thus degrade the quality of the fused information. Finding strategies to identify unreliable sensors can assist in having a counter-effect on their respective detrimental influences on the fusion process, and this has has been a focal concern in the literature. The purpose of this paper is to propose a solution to an extremely pertinent problem, namely, that of identifying which sensors are unreliable without any knowledge of the ground truth. This fascinating paradox can be formulated in simple terms as trying to identify stochastic liars without any additional information about the truth. Though apparently impossible, we will show that it is feasible to solve the problem, a claim that is counter-intuitive in and of itself. To the best of our knowledge, this is the first reported solution to the aforementioned paradox. Legacy work and the reported literature have merely addressed assessing the reliability of a sensor by comparing its reading to the ground truth either in an online or an offline manner. The informed reader will observe that the so-called Weighted Majority Algorithm is a representative example of a large class of such legacy algorithms. The essence of our approach involves studying the agreement of each sensor with the rest of the sensors, and not comparing the reading of the individual sensors with the ground truth - as advocated in the literature. Under some mild conditions on the reliability of the sensors, we can prove that we can, indeed, filter out the unreliable ones. Our approach leverages the power of the theory of Learning Automata (LA) so as to gradually learn the identi
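    As a rough illustration of the agreement idea described above, the following toy simulation scores each sensor by how often it concurs with its peers, never consulting the ground truth. The binary-reading model, the reliability values, and the scoring rule are our own illustrative assumptions, and this is a simplified stand-in for the paper's LA-based scheme.

```python
# Sketch: flag unreliable sensors purely from inter-sensor agreement,
# without ever consulting the ground truth. A simplified stand-in for the
# paper's LA-based approach; all values below are assumptions.
import random

def simulate_readings(reliabilities, rounds=2000, seed=1):
    """Sensor i reports the hidden binary truth with prob reliabilities[i]."""
    rng = random.Random(seed)
    readings = []
    for _ in range(rounds):
        truth = rng.randint(0, 1)
        readings.append([truth if rng.random() < r else 1 - truth
                         for r in reliabilities])
    return readings

def agreement_scores(readings):
    """Mean rate at which each sensor's reading matches each other sensor's."""
    n = len(readings[0])
    pair_agree = [[0] * n for _ in range(n)]
    for row in readings:
        for i in range(n):
            for j in range(i + 1, n):
                if row[i] == row[j]:
                    pair_agree[i][j] += 1
                    pair_agree[j][i] += 1
    rounds = len(readings)
    return [sum(pair_agree[i][j] for j in range(n) if j != i)
            / ((n - 1) * rounds) for i in range(n)]

reliabilities = [0.9, 0.85, 0.9, 0.8, 0.3, 0.35]  # last two are "liars"
for i, s in enumerate(agreement_scores(simulate_readings(reliabilities))):
    print(f"sensor {i}: mean agreement {s:.3f}")
```

    Because reliable sensors mostly echo the same truth, they agree with one another far more often than a liar agrees with anyone; thresholding the scores therefore separates the two groups even though the truth itself is never observed.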

    On Solving the Problem of Identifying Unreliable Sensors Without a Knowledge of the Ground Truth: The Case of Stochastic Environments

    The purpose of this paper is to propose a solution to an extremely pertinent problem, namely, that of identifying unreliable sensors (in a domain of reliable and unreliable ones) without any knowledge of the ground truth. This fascinating paradox can be formulated in simple terms as trying to identify stochastic liars without any additional information about the truth. Though apparently impossible, we will show that it is feasible to solve the problem, a claim that is counter-intuitive in and of itself. One aspect of our contribution is to show how redundancy can be introduced, and how it can be effectively utilized in resolving this paradox. Legacy work and the reported literature (for example, the so-called weighted majority algorithm) have merely addressed assessing the reliability of a sensor by comparing its readings to the ground truth, either in an online or an offline manner. Unfortunately, the fundamental assumption of revealing the ground truth cannot always be guaranteed (or even expected) in many real-life scenarios. While some extensions of the Condorcet jury theorem [9] can lead to a probabilistic guarantee on the quality of the fusion process, they do not provide a solution to the unreliable sensor identification problem. The essence of our approach involves studying the agreement of each sensor with the rest of the sensors, rather than comparing the readings of the individual sensors with the ground truth, as advocated in the literature. Under some mild conditions on the reliability of the sensors, we can prove that we can, indeed, filter out the unreliable ones. Our approach leverages the power of the theory of learning automata (LA) so as to gradually learn the identity of the reliable and unreliable sensors. To achieve this, we resort to a team of LA, where a distinct automaton is associated with each sensor. The solution provided here has been subjected to rigorous experimental tests, and the results presented are, in our opinion, both novel and conclusive.
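    To make the “team of LA” idea concrete, here is a toy sketch in which each sensor is assigned its own automaton whose belief drifts toward “reliable” whenever the sensor agrees with the majority of its peers. The linear reward-penalty style update, the simulator, and all parameters are our illustrative assumptions; the paper's actual update scheme and convergence argument are more refined.

```python
# Sketch of a "team of LA": one automaton per sensor, with a linear
# reward-penalty style update driven by agreement with the peer majority.
# An illustrative stand-in, not the paper's exact scheme.
import random

def simulate_readings(reliabilities, rounds=3000, seed=2):
    """Sensor i reports the hidden binary truth with prob reliabilities[i]."""
    rng = random.Random(seed)
    out = []
    for _ in range(rounds):
        truth = rng.randint(0, 1)
        out.append([truth if rng.random() < r else 1 - truth
                    for r in reliabilities])
    return out

def la_team_beliefs(readings, step=0.01):
    """One automaton per sensor; belief p[i] that sensor i is reliable."""
    n = len(readings[0])
    p = [0.5] * n
    for row in readings:
        for i in range(n):
            # Compare sensor i against the plain majority of the *other* sensors.
            others = [row[j] for j in range(n) if j != i]
            majority = 1 if 2 * sum(others) > len(others) else 0
            if row[i] == majority:
                p[i] += step * (1.0 - p[i])   # reward: drift toward "reliable"
            else:
                p[i] -= step * p[i]           # penalty: drift away
    return p

beliefs = la_team_beliefs(simulate_readings([0.9, 0.85, 0.9, 0.8, 0.9, 0.3]))
print([round(b, 2) for b in beliefs])
# The five reliable sensors' beliefs settle well above 0.5, while the
# liar's belief settles well below it, so a simple threshold separates them.
```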

    The effect of barusiban, a selective oxytocin antagonist, in threatened preterm labor at late gestational age: a randomized, double-blind, placebo-controlled trial

    OBJECTIVE: The objective of the study was to compare barusiban with placebo in threatened preterm labor. STUDY DESIGN: This was a randomized, double-blind, placebo-controlled, multicenter study. One hundred sixty-three women at 34 weeks to 35 weeks plus 6 days of gestation, with 6 or more contractions of 30 seconds' duration within 30 minutes, a cervical length of 15 mm or less, and cervical dilatation >1 cm and <4 cm, were randomized to a single intravenous bolus of barusiban (0.3, 1, 3, or 10 mg) or placebo. The primary endpoint was the percentage of women who did not deliver within 48 hours. RESULTS: None of the barusiban doses reduced the number of uterine contractions compared with placebo. There was no significant difference in the percentage of women who did not deliver within 48 hours (72% in the placebo group and 65-88% in the barusiban groups; P = .21-.84). Barusiban was not associated with an adverse safety profile in the woman, fetus, neonate, or infant. CONCLUSION: An intravenous bolus of barusiban was no more effective than placebo in stopping preterm labor in pregnant women at late gestational age.

    A novel strategy for solving the stochastic point location problem using a hierarchical searching scheme

    Stochastic point location (SPL) deals with the problem of a learning mechanism (LM) determining the optimal point on a line when the only input it receives consists of stochastic signals about the direction in which it should move. One can differentiate the SPL from the traditional class of optimization problems by the fact that the former considers the case where the directional information, for example, as inferred from an Oracle (which possibly computes the derivatives), suffices to achieve the optimization, without actually explicitly computing any derivatives. The SPL can be described in terms of an LM (algorithm) attempting to locate a point on a line. The LM interacts with a random environment which essentially informs it, possibly erroneously, whether the unknown parameter is on the left or the right of a given point. Given a current estimate of the optimal solution, all the reported solutions to this problem effectively move along the line to yield updated estimates which are in the neighborhood of the current solution. This paper proposes a dramatically distinct strategy, namely, that of partitioning the line in a hierarchical manner.
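    To ground the description, the following toy sketch implements the baseline behavior the abstract alludes to: an LM holding an estimate on the unit interval, nudged left or right by an environment whose hint is correct only with probability p. The fixed-step walk, the interval [0, 1], and p = 0.8 are illustrative assumptions; the paper's contribution is precisely to replace this neighborhood-bound walk with a hierarchical partitioning of the line.

```python
# Sketch of the classic SPL setting (assumptions, not the paper's
# algorithm): the LM walks along [0, 1] with a fixed step, guided by an
# environment whose left/right hint is correct only with probability p.
import random

def noisy_direction(estimate, target, p, rng):
    """Environment: returns the correct direction with prob p, else lies."""
    correct = 1 if target > estimate else -1
    return correct if rng.random() < p else -correct

def spl_fixed_step_walk(target, p=0.8, step=0.005, rounds=20_000, seed=3):
    """Baseline SPL learner: small fixed moves in the hinted direction."""
    rng = random.Random(seed)
    estimate = 0.5
    for _ in range(rounds):
        estimate += step * noisy_direction(estimate, target, p, rng)
        estimate = min(1.0, max(0.0, estimate))  # stay on the unit line
    return estimate

print(spl_fixed_step_walk(target=0.837))  # hovers near 0.837 since p > 0.5
```

    Because p > 0.5, the expected drift always points toward the target, so the estimate converges to its neighborhood; a hierarchical scheme instead narrows the search by partitioning the line, rather than inching along it.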