157 research outputs found

    A framework for assessing the condition of crowds exposed to a fire hazard using a probabilistic model

    Published version of an article in the journal: International Journal of Machine Learning and Computing. Also available from the publisher at: http://dx.doi.org/10.7763/IJMLC.2014.V4.379 (open access).

    Allocating limited resources optimally when rescuing victims from a hazard is a complex and error-prone task, because the involved hazards typically evolve over time: stagnating, building up, or diminishing. Typical error sources are miscalculation of resource availability and of the victims' condition. Thus, there is a need for decision support when it comes to rapidly predicting where human fatalities are likely to occur, so as to ensure timely rescue. This paper proposes a probabilistic model for tracking the condition of victims exposed to fire hazards, using a Bayesian Network. The model is extracted from safety literature on human physiological and psychological responses to heat, thermal radiation, and smoke. We simulate the state of victims under different fire scenarios and observe the likelihood of fatalities due to fire exposure. We show how our probabilistic approach can serve as the basis for improved decision support, providing real-time hazard and health assessments to decision makers.
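    The kind of probabilistic condition tracking described above can be illustrated with a minimal forward-filtering sketch. Everything here is assumed for illustration: the three-state condition model, the exposure values, and the transition probabilities are toy numbers, not the parameters of the paper's Bayesian Network.

    ```python
    import numpy as np

    # Hypothetical three-state model of a victim's condition; "fatality" is absorbing.
    STATES = ["healthy", "incapacitated", "fatality"]

    def transition_matrix(exposure):
        """Row-stochastic transitions; higher exposure (0..1) worsens outcomes.
        The coefficients are illustrative, not taken from the paper."""
        p = exposure
        return np.array([
            [1 - 0.5 * p, 0.4 * p, 0.1 * p],  # from healthy
            [0.0, 1 - 0.6 * p, 0.6 * p],      # from incapacitated
            [0.0, 0.0, 1.0],                  # fatality is absorbing
        ])

    belief = np.array([1.0, 0.0, 0.0])  # initial belief: certainly healthy
    for exposure in [0.1, 0.3, 0.8]:    # toy scenario: hazard builds up over time
        belief = belief @ transition_matrix(exposure)

    # The belief over conditions is what a decision maker would read off in real time.
    print({s: round(float(b), 3) for s, b in zip(STATES, belief)})
    ```

    As the exposure sequence intensifies, probability mass flows toward the worse states, which is the signal a rescuer would use to prioritize victims.
    
    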

    Accelerated Bayesian learning for decentralized two-armed bandit based decision making with applications to the Goore Game

    The two-armed bandit problem is a classical optimization problem where a decision maker sequentially pulls one of two arms attached to a gambling machine, with each pull resulting in a random reward. The reward distributions are unknown, and thus one must balance between exploiting existing knowledge about the arms and obtaining new information. Bandit problems are particularly fascinating because a large class of real-world problems, including routing, Quality of Service (QoS) control, game playing, and resource allocation, can be solved in a decentralized manner when modeled as a system of interacting gambling machines. Although computationally intractable in many cases, Bayesian methods provide a standard for optimal decision making. This paper proposes a novel scheme for decentralized decision making based on the Goore Game in which each decision maker is inherently Bayesian in nature, yet avoids computational intractability by relying simply on updating the hyperparameters of sibling conjugate priors, and on random sampling from these posteriors. We further report theoretical results on the variance of the random rewards experienced by each individual decision maker. Based on these theoretical results, each decision maker is able to accelerate its own learning by taking advantage of the increasingly more reliable feedback that is obtained as exploration gradually turns into exploitation in bandit-based learning. Extensive experiments, involving QoS control in simulated wireless sensor networks, demonstrate that the accelerated learning allows us to combine the benefits of conservative learning, which is high accuracy, with the benefits of hurried learning, which is fast convergence. In this manner, our scheme outperforms recently proposed Goore Game solution schemes, where one has to trade off accuracy against speed. As an additional benefit, performance also becomes more stable. We thus believe that our methodology opens avenues for improved performance in a number of applications of bandit-based decentralized decision making.
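    The core mechanism the abstract describes, updating the hyperparameters of conjugate priors and deciding by sampling from the posteriors, can be sketched for a two-armed Bernoulli bandit with Beta priors (i.e., Thompson sampling). The reward probabilities, horizon, and seed below are illustrative assumptions, not the paper's experimental setup.

    ```python
    import random

    random.seed(0)
    true_p = [0.4, 0.7]            # unknown Bernoulli reward probabilities (toy values)
    alpha = [1, 1]                 # Beta(1, 1) hyperparameters per arm: uniform priors
    beta = [1, 1]

    pulls = [0, 0]
    for _ in range(2000):
        # Draw one sample from each arm's posterior; pull the arm with the larger sample.
        samples = [random.betavariate(alpha[i], beta[i]) for i in range(2)]
        arm = samples.index(max(samples))
        reward = 1 if random.random() < true_p[arm] else 0
        # Conjugate update: only two hyperparameters change, so each step is cheap.
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1

    # As exploration turns into exploitation, the better arm dominates the pull counts.
    print(pulls)
    ```

    This is the computational shortcut the abstract alludes to: full Bayesian decision making is intractable in general, but with conjugate priors the posterior update is a constant-time increment and decisions reduce to posterior sampling.
    
    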

    Explainable Tsetlin Machine Framework for Fake News Detection with Credibility Score Assessment

    The proliferation of fake news, i.e., news intentionally spread for misinformation, poses a threat to individuals and society. Despite various fact-checking websites such as PolitiFact, robust detection techniques are required to deal with the increase in fake news. Several deep learning models show promising results for fake news classification; however, their black-box nature makes it difficult to explain their classification decisions and to quality-assure the models. We here address this problem by proposing a novel interpretable fake news detection framework based on the recently introduced Tsetlin Machine (TM). In brief, we utilize the conjunctive clauses of the TM to capture lexical and semantic properties of both true and fake news text. Further, we use clause ensembles to calculate the credibility of fake news. For evaluation, we conduct experiments on two publicly available datasets, PolitiFact and GossipCop, and demonstrate that the TM framework significantly outperforms previously published baselines by at least 5% in terms of accuracy, with the added benefit of an interpretable logic-based representation. In addition, our approach provides a higher F1-score than BERT and XLNet, though at slightly lower accuracy. We finally present a case study on our model's explainability, demonstrating how it decomposes into meaningful words and their negations.
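    The interpretable mechanism the abstract refers to, conjunctive clauses over words and their negations voting for or against a class, can be illustrated with a toy sketch. The vocabulary, clause literals, and document below are hand-picked assumptions for illustration, not learned TM clauses from the paper.

    ```python
    # A Tsetlin Machine style classifier sums votes from conjunctive clauses
    # over binary features (word present / absent) and their negations.
    def clause(features, positive_literals, negated_literals):
        """A conjunctive clause: true iff every listed literal holds."""
        return all(features[i] for i in positive_literals) and \
               all(not features[i] for i in negated_literals)

    # Toy document encoding: features[i] == 1 means word i occurs in the text.
    # Assumed vocabulary: 0="shocking", 1="sources", 2="allegedly", 3="reuters"
    doc = [1, 0, 1, 0]

    # Hand-written clauses voting for "fake" (+1) and for "real" (-1).
    fake_clauses = [([0], [1]),     # "shocking" AND NOT "sources"
                    ([2], [3])]     # "allegedly" AND NOT "reuters"
    real_clauses = [([1, 3], []),   # "sources" AND "reuters"
                    ([3], [2])]     # "reuters" AND NOT "allegedly"

    score = sum(clause(doc, p, n) for p, n in fake_clauses) \
          - sum(clause(doc, p, n) for p, n in real_clauses)
    label = "fake" if score >= 0 else "real"
    print(label, score)  # the clauses that fired are directly readable
    ```

    Because each vote comes from an explicit conjunction of words and negated words, the classification decomposes into human-readable evidence, and the voting margin can serve as a credibility-style score in the spirit of the clause ensembles described above.
    
    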