310 research outputs found
New Jersey Political Briefing
Memorandum detailing New Jersey political figures and landscape for the Ferraro campaign.
Memorandum: New Jersey Political Briefing
Details the New Jersey political scene and figures.
How Liability Insurers Protect Patients and Improve Safety
Forty years after the publication of the first systematic study of adverse medical events, there is greater access to information about adverse medical events and increasingly widespread acceptance of the view that patient safety requires more than vigilance by well-intentioned medical professionals. In this essay, we describe some of the ways that medical liability insurance organizations contributed to this transformation, and we catalog the roles that those organizations play in promoting patient safety today. Whether liability insurance in fact discourages providers from improving safety or encourages them to protect patients from avoidable harms is an empirical question that a survey like this one cannot resolve. But, as we show, insurers make serious efforts to reduce their losses by encouraging and helping health care providers to do better in at least six ways. (1) Insurers identify subpar providers in ways that give other institutions the opportunity to act. (2) Insurers create incentives for providers by charging risk-based premiums and by refusing to insure providers who pose too high a risk. (3) Insurers accumulate data for root cause analysis. (4) Insurers conduct loss prevention inspections of medical facilities. (5) Insurers educate providers about legal oversight and steps that they can take to manage their risks. (6) Finally, insurers provide financial and human capital support to patient safety organizations.
Embodied Active Learning of Relational State Abstractions for Bilevel Planning
State abstraction is an effective technique for planning in robotics
environments with continuous states and actions, long task horizons, and sparse
feedback. In object-oriented environments, predicates are a particularly useful
form of state abstraction because of their compatibility with symbolic planners
and their capacity for relational generalization. However, to plan with
predicates, the agent must be able to interpret them in continuous environment
states (i.e., ground the symbols). Manually programming predicate
interpretations can be difficult, so we would instead like to learn them from
data. We propose an embodied active learning paradigm where the agent learns
predicate interpretations through online interaction with an expert. For
example, after taking actions in a block stacking environment, the agent may
ask the expert: "Is On(block1, block2) true?" From this experience, the agent
learns to plan: it learns neural predicate interpretations, symbolic planning
operators, and neural samplers that can be used for bilevel planning. During
exploration, the agent plans to learn: it uses its current models to select
actions towards generating informative expert queries. We learn predicate
interpretations as ensembles of neural networks and use their entropy to
measure the informativeness of potential queries. We evaluate this approach in
three robotic environments and find that it consistently outperforms six
baselines while exhibiting sample efficiency in two key metrics: number of
environment interactions, and number of queries to the expert. Code:
https://tinyurl.com/active-predicates
Comment: Conference on Lifelong Learning Agents (CoLLAs) 202
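The entropy-based query scoring described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's code: each ensemble member outputs a probability that a candidate predicate (e.g. On(block1, block2)) holds in the current state, and the agent prefers queries where the ensemble's mean prediction has high binary entropy. All function names here are hypothetical.

```python
import math

def binary_entropy(p: float) -> float:
    """Entropy (in bits) of a Bernoulli distribution with parameter p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def query_informativeness(ensemble_probs: list[float]) -> float:
    """Score a candidate expert query by the entropy of the ensemble's
    mean predicted probability that the predicate holds.

    ensemble_probs: one predicted probability per ensemble member.
    """
    mean_p = sum(ensemble_probs) / len(ensemble_probs)
    return binary_entropy(mean_p)

# A confident ensemble yields low entropy (uninformative query);
# a split ensemble yields high entropy (worth asking the expert).
confident = query_informativeness([0.95, 0.97, 0.93])
uncertain = query_informativeness([0.20, 0.80, 0.50])
```

Under this sketch, exploration would rank candidate queries by this score and steer actions toward states where high-scoring queries can be asked.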
Few-Shot Bayesian Imitation Learning with Logical Program Policies
Humans can learn many novel tasks from a very small number (1--5) of
demonstrations, in stark contrast to the data requirements of nearly tabula
rasa deep learning methods. We propose an expressive class of policies, a
strong but general prior, and a learning algorithm that, together, can learn
interesting policies from very few examples. We represent policies as logical
combinations of programs drawn from a domain-specific language (DSL), define a
prior over policies with a probabilistic grammar, and derive an approximate
Bayesian inference algorithm to learn policies from demonstrations. In
experiments, we study five strategy games played on a 2D grid with one shared
DSL. After a few demonstrations of each game, the inferred policies generalize
to new game instances that differ substantially from the demonstrations. Our
policy learning is 20--1,000x more data efficient than convolutional and fully
convolutional policy learning and many orders of magnitude more computationally
efficient than vanilla program induction. We argue that the proposed method is
an apt choice for tasks that have scarce training data and feature significant,
structured variation between task instances.
Comment: AAAI 202
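The Bayesian inference the abstract describes (a prior over DSL programs combined with a likelihood over demonstrations) can be sketched in miniature. This is a toy illustration under stated assumptions, not the paper's algorithm: the prior penalizes program size (as a probabilistic grammar over a DSL roughly would), the likelihood assumes the demonstrator follows the policy with high probability, and MAP inference is brute-force over a small candidate set. All names are hypothetical.

```python
import math

def log_prior(program_size: int, decay: float = 0.5) -> float:
    # Geometric-style prior: larger programs are exponentially less likely,
    # standing in for a prior induced by a probabilistic grammar.
    return program_size * math.log(decay)

def log_likelihood(predict_fn, demos, noise: float = 0.05) -> float:
    # Assume the demonstrator follows the policy with prob (1 - noise),
    # otherwise deviates; score each demonstrated (state, action) pair.
    total = 0.0
    for state, action in demos:
        p = (1.0 - noise) if predict_fn(state) == action else noise
        total += math.log(p)
    return total

def map_policy(candidates, demos):
    # candidates: list of (program_size, predict_fn) pairs.
    # Return the candidate with the highest (unnormalized) log posterior.
    return max(candidates,
               key=lambda c: log_prior(c[0]) + log_likelihood(c[1], demos))

demos = [(0, "a"), (1, "b")]
candidates = [
    (1, lambda s: "a"),                      # short program, fits one demo
    (3, lambda s: "a" if s == 0 else "b"),   # longer program, fits both
]
best = map_policy(candidates, demos)
```

The trade-off the prior and likelihood encode is the point: a short program that mispredicts a demonstration pays a large likelihood penalty, so the slightly longer consistent program wins even under a size-penalizing prior.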
The Roles of Symbols in Neural-based AI: They are Not What You Think!
We propose that symbols are first and foremost external communication tools
used between intelligent agents that allow knowledge to be transferred in a
more efficient and effective manner than having to experience the world
directly. But, they are also used internally within an agent through a form of
self-communication to help formulate, describe and justify subsymbolic patterns
of neural activity that truly implement thinking. Symbols, and our languages
that make use of them, not only allow us to explain our thinking to others and
ourselves, but also provide beneficial constraints (inductive bias) on learning
about the world. In this paper we present relevant insights from neuroscience
and cognitive science, about how the human brain represents symbols and the
concepts they refer to, and how today's artificial neural networks can do the
same. We then present a novel neuro-symbolic hypothesis and a plausible
architecture for intelligent agents that combines subsymbolic representations
for symbols and concepts for learning and reasoning. Our hypothesis and
associated architecture imply that symbols will remain critical to the future
of intelligent systems NOT because they are the fundamental building blocks of
thought, but because they are characterizations of subsymbolic processes that
constitute thought.
Comment: 28 page