The Defend Trade Secrets Act Whistleblower Immunity Provision: A Legislative History
The Defend Trade Secrets Act of 2016 (DTSA) was the product of a multi-year effort to federalize trade secret protection. In the final stages of drafting the DTSA, Senators Grassley and Leahy introduced an important new element: immunity for whistleblowers who share confidential information in the course of reporting suspected illegal activity to law enforcement or when filing a lawsuit, provided they do so under seal. The meaning and scope of this provision are of vital importance to enforcing health, safety, civil rights, financial market, consumer, and environmental protections and deterring fraud against the government, shareholders, and the public. This article explains how the whistleblower immunity provision was formulated and offers insights into its proper interpretation.
Active Inverse Reward Design
Designers of AI agents often iterate on the reward function in a trial-and-error process until they get the desired behavior, but this only guarantees good behavior in the training environment. We propose structuring this process as a series of queries asking the user to compare different reward functions, which lets us actively select the queries that are most informative about the true reward. In contrast to approaches that ask the designer for optimal behavior, this allows us to gather additional information by eliciting preferences between suboptimal behaviors. After each query, we need to update the posterior over the true reward function from observing the proxy reward function chosen by the designer; the recently proposed Inverse Reward Design (IRD) enables this. Our approach substantially outperforms IRD in test environments. In particular, it can query the designer about interpretable, linear reward functions and still infer non-linear ones.
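As an illustrative aside, the sketch below shows one way the active-query loop described above could look in code: maintain a discrete posterior over candidate true reward weights, score each candidate query by expected information gain, and update the posterior after observing which proxy the designer picks. The feature matrix, the Boltzmann designer model, and all variable names are simplifying assumptions made here for illustration, not the paper's exact formulation.

import numpy as np

rng = np.random.default_rng(0)

PHI = rng.normal(size=(8, 3))           # feature vectors of 8 training-env states (assumed)
W_TRUE_GRID = rng.normal(size=(50, 3))  # discrete hypothesis grid over the true weights
posterior = np.full(len(W_TRUE_GRID), 1.0 / len(W_TRUE_GRID))

def proxy_returns(w, proxies):
    # Return achieved under true weights w when each candidate proxy is
    # optimized greedily over the training states (one-step planning for brevity).
    returns = []
    for w_proxy in proxies:
        s = int(np.argmax(PHI @ w_proxy))  # state visited by the proxy-optimal policy
        returns.append(PHI[s] @ w)         # value of that state under w
    return np.array(returns)

def choice_likelihood(w, proxies, beta=5.0):
    # Boltzmann designer model: proxies that do well under w are chosen more often.
    logits = beta * proxy_returns(w, proxies)
    expl = np.exp(logits - logits.max())
    return expl / expl.sum()

def expected_information_gain(proxies, posterior):
    # Expected KL divergence between the updated and current posterior,
    # averaged over the designer's possible answers.
    lik = np.stack([choice_likelihood(w, proxies) for w in W_TRUE_GRID])  # (W, K)
    p_answer = posterior @ lik                                            # (K,)
    post_given_ans = posterior[:, None] * lik / p_answer                  # (W, K)
    kl = (post_given_ans *
          np.log(post_given_ans / posterior[:, None] + 1e-12)).sum(axis=0)
    return float((p_answer * kl).sum())

# Actively pick the most informative two-way query among random candidates.
candidate_queries = [rng.normal(size=(2, 3)) for _ in range(20)]
scores = [expected_information_gain(q, posterior) for q in candidate_queries]
query = candidate_queries[int(np.argmax(scores))]

# Simulate the designer's answer under some pretend "true" weights, then update.
w_star = W_TRUE_GRID[7]
answer = rng.choice(2, p=choice_likelihood(w_star, query))
lik_answer = np.array([choice_likelihood(w, query)[answer] for w in W_TRUE_GRID])
posterior = posterior * lik_answer
posterior /= posterior.sum()
print("posterior mass on the simulated true weights:", posterior[7])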
The Assistive Multi-Armed Bandit
Learning the preferences implicit in the choices humans make is a well-studied problem in both economics and computer science. However, most work assumes that humans act (noisily) optimally with respect to their preferences; such approaches can fail when people are themselves still learning about what they want. In this work, we introduce the assistive multi-armed bandit, in which a robot assists a human playing a bandit task to maximize cumulative reward. In this problem, the human does not know the reward function but can learn it through the rewards received from arm pulls; the robot only observes which arms the human pulls, not the reward associated with each pull. We give necessary and sufficient conditions for successfully assisting the human in this framework. Surprisingly, better human performance in isolation does not necessarily lead to better performance when assisted by the robot: a human policy can do better by effectively communicating its observed rewards to the robot. We conduct proof-of-concept experiments that support these results. We see this work as contributing towards a theory behind algorithms for human-robot interaction.
Comment: Accepted to HRI 201
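As an illustrative aside, the toy simulation below mirrors the interaction pattern described above: the human privately observes rewards and advises an arm each round, while the robot sees only the sequence of advised arms and decides which arm is actually pulled. The epsilon-greedy human, the frequency-following robot policy, and the Bernoulli arms are simplifying assumptions made here for illustration, not the paper's formal model or its characterization of when assistance helps.

import numpy as np

rng = np.random.default_rng(1)
TRUE_MEANS = np.array([0.2, 0.5, 0.8])   # Bernoulli arm means, unknown to both agents
N_ARMS, HORIZON = len(TRUE_MEANS), 200

# The human's private running statistics (the robot never sees these or the rewards).
counts = np.ones(N_ARMS)
reward_sums = np.full(N_ARMS, 0.5)

advice_history = []
total_reward = 0.0
for t in range(HORIZON):
    # Human advises an arm: epsilon-greedy on their private reward estimates.
    if rng.random() < 0.1:
        advised = int(rng.integers(N_ARMS))
    else:
        advised = int(np.argmax(reward_sums / counts))
    advice_history.append(advised)

    # Robot observes only the advice history and chooses the arm to pull.
    # Here it simply follows the arm advised most often in the last 20 rounds.
    recent = np.bincount(advice_history[-20:], minlength=N_ARMS)
    pulled = int(np.argmax(recent))

    # The environment returns a reward. The human sees which arm was pulled and
    # its reward and updates their estimates; the robot never observes the reward.
    reward = float(rng.random() < TRUE_MEANS[pulled])
    total_reward += reward
    counts[pulled] += 1
    reward_sums[pulled] += reward

print("average reward per round:", total_reward / HORIZON,
      "| best arm's mean:", TRUE_MEANS.max())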
Misconstruing Whistleblower Immunity Under the Defend Trade Secrets Act
In crafting the Defend Trade Secrets Act of 2016 (DTSA), Congress went beyond the federalization of state trade secret protection to tackle a broader social justice problem: the misuse of nondisclosure agreements (NDAs) to discourage reporting of illegal activity in a variety of areas. The past few decades have witnessed devastating government contracting abuses, regulatory violations, and deceptive financial schemes that have hurt the public and cost taxpayers and investors billions of dollars. Congress recognized that immunizing whistleblowers from the cost and risk of trade secret liability for providing information to the Government could spur law enforcement. But could this goal be accomplished without jeopardizing legitimate trade secret protection?