Grounding Innovation: How Ex-Ante Prohibitions and Ex-Post Allowances Impede Commercial Drone Use
Unmanned aerial vehicles ("UAVs" or "drones") are increasingly becoming a mainstream commercial phenomenon and a tool for a vast range of consumer, prosumer, and professional activities. Given advances in automation and miniaturization generally, and in flight control stability and autopilot systems specifically, anyone can now fly in any airspace at any time by operating hand-held fixed-wing aircraft or quadcopters with little more than an ordinary smartphone or tablet. As such, sales of store-bought drones number in the millions, corresponding to the wide range of civil applications and value propositions that UAVs offer.
Though civil drones are an attractive business investment, substantial regulatory headwinds confront the drone industry as startups endeavor to get to market and scale quickly. This is so notwithstanding (or perhaps even because of) the celebrated abilities of most small UAVs to fly boundlessly and collect and record information from nearly any vantage point. Drones are a classically disruptive technology of social, economic, and legal norms. Their operations raise novel and valid concerns in many of these areas, particularly in terms of safety and privacy. Consequently, regulators have responded, and they should. But federal, state, and local lawmakers alike have responded with policy interventions that are too often premature (or untimely) and overly rigid, discouraging the many beneficial uses of UAV technology. In fact, on the basis of ephemeral fears rather than data, regulators initially put in place overbroad and permission-based restraints that were tantamount to a de facto ban on all drone operations.
This Article critiques the underlying thinking and approach that federal regulators have taken with respect to civil drones and argues that commercial UAVs should be a "permissionless innovation." This Article posits that a better alternative to a top-down, ex-ante regulatory scheme is to broadly allow commercial UAVs and to deal with careless, reckless, or nefarious operators and operations on a case-by-case, ex-post basis. In doing so, this Article aims to present lessons learned in the context of commercial UAVs so that inefficiencies and paternalistic rulemaking can be avoided in the regulation of other innovations associated with the Internet of Things, including urban air mobility and electric vertical-takeoff-and-landing technologies (otherwise known as flying cars) that are just around the corner.
Post-Westgate SWAT: C4ISTAR Architectural Framework for Autonomous Network Integrated Multifaceted Warfighting Solutions Version 1.0: A Peer-Reviewed Monograph
Police SWAT teams and military Special Forces face mounting pressure and challenges from adversaries that can only be resolved by ever more sophisticated inputs into tactical operations. Lethal autonomy provides constrained military and security forces with a viable option, but only if implementation rests on proper, empirically supported foundations. Autonomous weapon systems can be designed and developed to conduct ground, air, and naval operations. This monograph offers insights into the challenges of developing legal, reliable, and ethical forms of autonomous weapons that address the rapidly narrowing gap between police or law enforcement and military operations. National adversaries are today in many instances hybrid threats that manifest both criminal and military traits; these often require deployment of hybrid-capability autonomous weapons imbued with the capability to take on both military and security objectives. The Westgate terrorist attack of 21 September 2013 in the Westlands suburb of Nairobi, Kenya is a very clear manifestation of the hybrid combat scenario that required military response and police investigations against a fighting cell of the Somalia-based, globally networked Al Shabaab terrorist group.
Comment: 52 pages, 6 figures, over 40 references, reviewed by a reader
"The Only Game in Town" – But is it a Legal One? American Drone Strikes and International Law
In 2002 a US Predator drone operating above Afghanistan's Paktia province spotted three men in Zhawar Kili, a complex slightly north of the infamous Tora Bora cave system, an area used by al-Qaeda leadership to train and regroup. One of the men was tall; supposedly the others were acting reverently towards him. Convinced the tall man was Osama bin Laden, operators fired a Hellfire missile from the Predator, killing all three men instantly. The tall man was not bin Laden. None of the men were even affiliated with al-Qaeda or the Taliban; they were simply civilians in the wrong place at the wrong time. This strike, and many others that are all too similar, raises a multitude of questions, both legal and moral, regarding the US lethal drone strike programme. This article examines the legal implications of US drone strikes, not only in Afghanistan but also further afield in Pakistan, Yemen, and Somalia, beyond the more traditional and accepted battlefields.
Autonomous Weapon Systems: A Brief Survey of Developmental, Operational, Legal, and Ethical Issues
What does the Department of Defense hope to gain from the use of autonomous weapon systems (AWS)? This Letort Paper explores a diverse set of complex issues related to the developmental, operational, legal, and ethical aspects of AWS. It explores the recent history of the development and integration of autonomous and semi-autonomous systems into traditional military operations. It examines the anticipated expansion of these roles in the near future and outlines international efforts to provide a context for the use of these systems by the United States. As these topics are well documented in many sources, this Paper serves as a primer for current and future AWS operations, providing senior policymakers, decisionmakers, military leaders, and their respective staffs an overall appreciation of existing capabilities and of the challenges, opportunities, and risks associated with the use of AWS across the range of military operations. Emphasis is placed on missions and systems that include the use of deadly force.
Moving beyond privacy and airspace safety: Guidelines for just drones in policing
The use of drones offers police forces potential gains in efficiency and safety. However, their use may also harm public perception of the police if drones are rejected. Therefore, police forces should consider the perception of bystanders and broader society to maximize drones' potential. This article examines the concerns expressed by members of the public during a field trial involving 52 test participants. Analysis of the group interviews suggests that their worries go beyond airspace safety and privacy, which are broadly discussed in existing literature and regulations. The interpretation of the results indicates that the perceived justice of drone use is a significant factor in acceptance. Leveraging the concept of organizational justice and the data collected, we propose a catalogue of guidelines for the just operation of drones to supplement existing policy. We present the organizational justice perspective as a framework for integrating the concerns of the public and bystanders into legal work. Finally, we discuss the relevance of justice for the legitimacy of the police's actions and provide implications for research and practice.
Artificial Intelligence and civil liability
This study, commissioned by Policy Department C at the request of the Committee on Legal Affairs, analyses the notion of AI-technologies and the applicable legal framework for civil liability. It demonstrates how technology regulation should be technology-specific, and presents a Risk Management Approach, under which the party best capable of controlling and managing a technology-related risk is held strictly liable, as a single entry point for litigation. It then applies this approach to four case studies to elaborate recommendations.
Group Agency and Artificial Intelligence
The aim of this exploratory paper is to discuss a sometimes recognized but still under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities' actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificial intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.
- …