    Detection of Fake News Using Machine Learning

    Get PDF
    In recent years, largely since people gained quick access to social media, fake news has become a serious problem and spreads farther and faster than genuine news. As the widespread effects of the recent surge of fake news demonstrate, humans are often unable to detect whether a story is genuine or fabricated. This has motivated research into methods for fake news detection. The most popular of these efforts are “blacklists” of sources and authors that are not trustworthy. While such tools are helpful, a more complete end-to-end solution must also account for the harder cases in which otherwise reliable sources and authors release false news. The goal of this project is to build a tool that uses machine learning to investigate the language patterns that characterize false and genuine news. The results of this project demonstrate that machine learning can be useful for this task. We built a model that detects several intuitive indicators of genuine and false news.
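    The abstract does not name a specific model, so the following is only a minimal sketch of the kind of language-pattern classifier such a project might use: a TF-IDF bag-of-words representation fed to a logistic regression via scikit-learn. The news.csv file and its text/label columns are hypothetical placeholders, not the dataset used in the paper.

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Hypothetical dataset with a "text" column and a binary "label" column
    # (1 = fake, 0 = genuine); placeholder only.
    df = pd.read_csv("news.csv")
    X_train, X_test, y_train, y_test = train_test_split(
        df["text"], df["label"], test_size=0.2, random_state=0)

    # Represent each article by TF-IDF weights over its words.
    vectorizer = TfidfVectorizer(stop_words="english", max_features=50000)
    X_train_vec = vectorizer.fit_transform(X_train)
    X_test_vec = vectorizer.transform(X_test)

    # A linear classifier over word weights surfaces indicative terms.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train_vec, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test_vec)))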

    Finger-tip injuries: a study on functional outcomes of various methods of treatment

    Get PDF
    Background: Fingertip injuries are the most common injuries of the hand. Although maintenance of length, preservation of the nail, and appearance are important, the primary goal of treatment is a painless fingertip with durable and sensate skin. Restoring the original form, or reconstructing the most comfortable and functional compromise, is the essence of the challenge faced by the surgeon who manages the injured fingertip. Methods: This descriptive study evaluated the outcomes of various management approaches (conservative treatment, primary closure, SSG, and various flaps) for fingertip injuries in 180 patients from December 2014 to 2016. Results: Of the 180 patients, 30 dropped out; 76% were male and 24% female, and 68% were children and labourers. The index finger was involved in 55% of cases, and 42% of injuries were caused by machines or door entrapment. Conservative management and cross-finger flaps had better outcomes. Conclusions: This is a preliminary report of 150 cases of fingertip injury with tissue loss. Most patients were injured while working, and the majority of trauma was caused by various machines. The method of treatment was chosen according to the type of injury, the patient's age, and occupation.

    Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning

    Full text link
    Reinforcement learning (RL) algorithms hold the promise of enabling autonomous skill acquisition for robotic systems. However, in practice, real-world robotic RL typically requires time-consuming data collection and frequent human intervention to reset the environment. Moreover, robotic policies learned with RL often fail when deployed beyond the carefully controlled setting in which they were learned. In this work, we study how these challenges can all be tackled by effective utilization of diverse offline datasets collected from previously seen tasks. When faced with a new task, our system adapts previously learned skills to quickly learn to both perform the new task and return the environment to an initial state, effectively performing its own environment reset. Our empirical results demonstrate that incorporating prior data into robotic reinforcement learning enables autonomous learning, substantially improves the sample efficiency of learning, and enables better generalization. Project website: https://sites.google.com/view/ariel-berkeley/ (17 pages).
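    As a rough illustration of the reset-free training loop the abstract describes, the sketch below alternates a forward policy (which attempts the new task) with a backward policy (which returns the environment to an initial state), both initialized from prior offline data. The policy and environment interfaces here are hypothetical placeholders, not the authors' API.

    # Hedged sketch with hypothetical interfaces; not the authors' implementation.
    def reset_free_training(env, forward_policy, backward_policy, n_rounds=1000):
        obs = env.reset()  # a single initial, human-provided reset
        for _ in range(n_rounds):
            # Run the forward policy to attempt the task, then the backward
            # policy to undo it, so no human reset is needed between attempts.
            for policy in (forward_policy, backward_policy):
                for _ in range(policy.max_steps):
                    action = policy.act(obs)
                    obs, reward, done, info = env.step(action)
                    policy.add_to_buffer(obs, action, reward, done)
                    if done:
                        break
                policy.update()  # RL update on prior offline data + new experience
        return forward_policy, backward_policy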

    Pre-Training for Robots: Offline RL Enables Learning New Tasks from a Handful of Trials

    Full text link
    Progress in deep learning highlights the tremendous potential of utilizing diverse robotic datasets for attaining effective generalization and makes it enticing to consider leveraging broad datasets for attaining robust generalization in robotic learning as well. However, in practice, we often want to learn a new skill in a new environment that is unlikely to be contained in the prior data. Therefore, we ask: how can we leverage existing diverse offline datasets in combination with small amounts of task-specific data to solve new tasks, while still enjoying the generalization benefits of training on large amounts of data? In this paper, we demonstrate that end-to-end offline RL can be an effective approach for doing this, without the need for any representation learning or vision-based pre-training. We present pre-training for robots (PTR), a framework based on offline RL that attempts to effectively learn new tasks by combining pre-training on existing robotic datasets with rapid fine-tuning on a new task, with as few as 10 demonstrations. PTR utilizes an existing offline RL method, conservative Q-learning (CQL), but extends it to include several crucial design decisions that enable PTR to actually work and outperform a variety of prior methods. To our knowledge, PTR is the first RL method that succeeds at learning new tasks in a new domain on a real WidowX robot with as few as 10 task demonstrations, by effectively leveraging an existing dataset of diverse multi-task robot data collected in a variety of toy kitchens. We also demonstrate that PTR can enable effective autonomous fine-tuning and improvement in a handful of trials, without needing any demonstrations. An accompanying overview video can be found in the supplementary material and at this URL: https://sites.google.com/view/ptr-final
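    The abstract describes a two-phase recipe: offline RL (CQL) pre-training on an existing multi-task dataset, followed by fine-tuning on roughly ten demonstrations of the target task. The sketch below shows only that structure; the agent and dataset objects are hypothetical, and the batch-mixing ratio is an assumption rather than a detail taken from the paper.

    import random

    # Hedged sketch of a pre-train-then-fine-tune loop; not the authors' code.
    def pretrain_then_finetune(cql_agent, prior_data, target_demos,
                               pretrain_steps=500000, finetune_steps=50000,
                               target_fraction=0.5):
        # Phase 1: multi-task offline RL pre-training on the diverse prior dataset.
        for _ in range(pretrain_steps):
            cql_agent.update(prior_data.sample_batch())

        # Phase 2: rapid fine-tuning, mixing the ~10 target-task demonstrations
        # with prior data so the tiny target set is not overfit.
        for _ in range(finetune_steps):
            batch = (target_demos.sample_batch()
                     if random.random() < target_fraction
                     else prior_data.sample_batch())
            cql_agent.update(batch)
        return cql_agent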

    Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning

    Full text link
    A compelling use case of offline reinforcement learning (RL) is to obtain a policy initialization from existing datasets followed by fast online fine-tuning with limited interaction. However, existing offline RL methods tend to behave poorly during fine-tuning. In this paper, we study the fine-tuning problem in the context of conservative offline RL methods and we devise an approach for learning an effective initialization from offline data that also enables fast online fine-tuning capabilities. Our approach, calibrated Q-learning (Cal-QL), accomplishes this by learning a conservative value function initialization that underestimates the value of the learned policy from offline data, while also ensuring that the learned Q-values are at a reasonable scale. We refer to this property as calibration, and define it formally as providing a lower bound on the true value function of the learned policy and an upper bound on the value of some other (suboptimal) reference policy, which may simply be the behavior policy. We show that a conservative offline RL algorithm that also learns a calibrated value function leads to effective online fine-tuning, enabling us to take advantage of offline initializations in online fine-tuning. In practice, Cal-QL can be implemented on top of conservative Q-learning (CQL) for offline RL within a one-line code change. Empirically, Cal-QL outperforms state-of-the-art methods on 9/11 fine-tuning benchmark tasks that we study in this paper. Code and video are available at the project page: https://nakamotoo.github.io/projects/Cal-QL
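    The "one-line code change" mentioned above can be pictured as follows: in the CQL conservatism term, the Q-values of policy actions being pushed down are clipped from below by a reference value (for example, a Monte-Carlo return estimate of the behavior policy). The function below is only a hedged PyTorch sketch of that idea, with names of my own choosing, not the released implementation.

    import torch

    def cql_conservatism(q_policy_actions, q_dataset_actions,
                         reference_values=None, alpha=1.0):
        # Standard CQL term: push down Q at policy actions, push up at data actions.
        if reference_values is not None:
            # Cal-QL-style calibration (the "one-line change"): never push the
            # policy's Q-values below the reference policy's value estimate.
            q_policy_actions = torch.maximum(q_policy_actions, reference_values)
        return alpha * (q_policy_actions.mean() - q_dataset_actions.mean())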

    Robotic Offline RL from Internet Videos via Value-Function Pre-Training

    Full text link
    Pre-training on Internet data has proven to be a key ingredient for broad generalization in many modern ML systems. What would it take to enable such capabilities in robotic reinforcement learning (RL)? Offline RL methods, which learn from datasets of robot experience, offer one way to leverage prior data into the robotic learning pipeline. However, these methods have a "type mismatch" with video data (such as Ego4D), the largest prior datasets available for robotics, since video offers observation-only experience without the action or reward annotations needed for RL methods. In this paper, we develop a system for leveraging large-scale human video datasets in robotic offline RL, based entirely on learning value functions via temporal-difference learning. We show that value learning on video datasets learns representations that are more conducive to downstream robotic offline RL than other approaches for learning from video data. Our system, called V-PTR, combines the benefits of pre-training on video data with robotic offline RL approaches that train on diverse robot data, resulting in value functions and policies for manipulation tasks that perform better, act robustly, and generalize broadly. On several manipulation tasks on a real WidowX robot, our framework produces policies that greatly improve over prior methods. Our video and additional details can be found at https://dibyaghosh.com/vptr/ (first three authors contributed equally).
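    Because video carries no action or reward labels, the value function described above is trained with temporal-difference backups over consecutive frames alone. The snippet below is a hedged PyTorch sketch of such an action-free TD(0) update on frame embeddings; the network sizes, the surrogate reward, and the use of a target network are assumptions, not details from the paper.

    import torch
    import torch.nn as nn

    class FrameValueNet(nn.Module):
        # Predicts a scalar value from a pre-computed video-frame embedding.
        def __init__(self, embed_dim=512):
            super().__init__()
            self.head = nn.Sequential(nn.Linear(embed_dim, 256), nn.ReLU(),
                                      nn.Linear(256, 1))

        def forward(self, frame_embedding):
            return self.head(frame_embedding)

    def action_free_td_loss(value_net, target_net, emb_t, emb_t1, reward, gamma=0.99):
        # TD(0) target built purely from consecutive frames: no actions needed.
        with torch.no_grad():
            target = reward + gamma * target_net(emb_t1)
        return ((value_net(emb_t) - target) ** 2).mean()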

    Results and adverse events of personalized peptide receptor radionuclide therapy with 90Yttrium and 177Lutetium in 1048 patients with neuroendocrine neoplasms

    Get PDF
    Peptide receptor radionuclide therapy (PRRT) of patients with somatostatin receptor expressing neuroendocrine neoplasms has shown promising results in clinical trials and a recently published phase III study. In our center, 2294 patients were screened between 2004 and 2014 by 68Ga somatostatin receptor (SSTR) PET/CT. The intention-to-treat analysis included 1048 patients who received at least one cycle of 90Yttrium- or 177Lutetium-based PRRT. Progression-free survival was determined by 68Ga SSTR-PET/CT and EORTC response criteria, and adverse events were determined by CTCAE criteria. Overall survival (95% confidence interval) of all patients was 51 months (47.0-54.9) and differed significantly according to radionuclide, grading, previous therapies, primary site, and functionality. Progression-free survival (based on PET/CT) of all patients was 19 months (16.9-21) and was significantly influenced by radionuclide, grading, and origin of the neuroendocrine neoplasm. Progression-free survival after initial progression and after first and second resumption of PRRT following therapy-free intervals of more than 6 months was 11 months (9.4-12.5) and 8 months (6.4-9.5), respectively. Myelodysplastic syndrome or leukemia developed in 22 patients (2.1%), and 5 patients required hemodialysis after treatment; other adverse events were rare. PRRT is effective, and overall survival is favorable in patients with neuroendocrine neoplasms, depending on the radionuclide used for therapy and the grading and origin of the neoplasm; this is not exactly mirrored in progression-free survival as determined by highly sensitive 68Ga somatostatin receptor PET/CT using EORTC criteria to assess response to therapy.