
    Developing U.S. Nuclear Weapons Policy and International Law: The Approach of the Obama Administration

    Prior U.S. presidential administrations developed and adhered to a nuclear weapons policy of nuclear deterrence. This policy was largely conditioned by the Cold War and by the fact that the U.S. Cold War adversary, through its nuclear capability, posed a major threat to U.S. security. The policy of nuclear deterrence worked on the principle of mutually assured destruction and appears to have discouraged recourse to nuclear weapons as instruments of war. It has also been generally perceived as a position that sits uneasily with conventional international law. Even before entering office, President Obama suggested the need for a new perspective on nuclear weapons control: regulation and possible abolition. It was therefore with much anticipation that public opinion awaited the Obama Nuclear Posture Review (NPR). The report, however, did not quite measure up to the public’s expectations. For example, the Administration reaffirmed NATO obligations that require U.S. adherence to the policy of nuclear deterrence, which does not represent a significant change from past policy. Nevertheless, strategic developments in treaty commitments with both NATO allies and former Cold War opponents imply a closer approximation to international law standards regarding the threat and use of nuclear weapons. Thus, while current U.S. policy generates an expectation regarding the threat or use and the abolition of nuclear weapons, it retains an element of nuclear deterrence in its strategic posture, which, as indicated, remains in tension with international law. U.S. security strategy straddles a delicate balance between unilateral action and action consistent with promoting and defending international law in the national interest.

    Production of Innovations within Farmer–Researcher Associations Applying Transdisciplinary Research Principles

    Small-scale farmers in sub-Saharan West Africa depend heavily on local resources and local knowledge. Science-based knowledge is likely to aid decision-making in complex situations. In this presentation, we highlight a FiBL-coordinated research partnership between three national producer organisations and national agricultural research bodies in Mali, Burkina Faso, and Benin. The partnership seeks to compare conventional, GMO-based, and organic cotton systems with regard to food security and climate change.

    Application of active device authentication mechanisms in the human-machine interface of SCADA networks

    Supervisory Control and Data Acquisition (SCADA) systems are a type of Industrial Control System (ICS) that both monitor and control the critical infrastructure that delivers manufactured goods, water, and energy. These systems are responsible for supervising everything from natural gas valves to electric substations. For the past half century, SCADA and ICS networks have been proprietary, closed systems, entirely contained within a private network. Their security was derived from air-gap networking, physically isolating these systems from the Internet. However, system operators are increasingly opting to connect their control systems to the Internet or to corporate intranets in order to substantially reduce operating costs and improve reporting capabilities. This architecture change has given rise to a new and poorly understood class of risk. In this work, we examine how a security concept known as Active Device Authentication can be applied to the SCADA system threat model. As our contribution, we develop a software tool known as Gatekeeper that wraps Active Device Authentication capabilities around existing, weaker authentication mechanisms present in off-the-shelf HMI software written in Java. This work aims to provide the reader with a stronger understanding of the concept of Active Device Authentication and of how it can be deployed into legacy, proprietary, or mission-critical environments to enable additional security controls without risk of impacting the underlying systems’ reliability.
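    The abstract does not disclose Gatekeeper’s internals, but the general idea of active device authentication can be illustrated with a pre-shared-key challenge–response exchange layered in front of an existing, weaker HMI login. The minimal Java sketch below is an illustration under stated assumptions, not the Gatekeeper implementation; all class and method names are invented.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;
import java.security.SecureRandom;

// Hypothetical sketch of HMAC-based challenge-response device authentication,
// layered in front of an existing (weaker) HMI login. Not the Gatekeeper code.
public class DeviceAuthSketch {
    private static final SecureRandom RNG = new SecureRandom();

    // Server side: issue a fresh random challenge (nonce) per session.
    static byte[] newChallenge() {
        byte[] nonce = new byte[32];
        RNG.nextBytes(nonce);
        return nonce;
    }

    // Device side: prove possession of the pre-shared device key by
    // computing an HMAC over the server's challenge.
    static byte[] respond(byte[] challenge, byte[] deviceKey) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(deviceKey, "HmacSHA256"));
        return mac.doFinal(challenge);
    }

    // Server side: recompute the expected response and compare in constant
    // time; only on success is the session handed to the underlying HMI login.
    static boolean verify(byte[] challenge, byte[] response, byte[] deviceKey)
            throws Exception {
        return MessageDigest.isEqual(respond(challenge, deviceKey), response);
    }
}
```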

    The Prom Problem: Fair and Privacy-Enhanced Matchmaking with Identity Linked Wishes

    In the Prom Problem (TPP), Alice wishes to attend a school dance with Bob and needs a risk-free, privacy-preserving way to find out whether Bob shares that same wish. If not, no one should know that she inquired about it, not even Bob. TPP represents a special class of matchmaking challenges, augmenting the properties of privacy-enhanced matchmaking by further requiring fairness and support for identity-linked wishes (ILW) – wishes involving specific identities that are only valid if all involved parties hold those same wishes. The Horne-Nair (HN) protocol was proposed as a solution to TPP, along with a sample pseudo-code embodiment leveraging an untrusted matchmaker. Neither identities nor pseudo-identities are included in any messages or stored in the matchmaker’s database, and privacy-relevant data stay within user control. A security analysis and a proof-of-concept implementation validated the approach, fairness was quantified, and a feasibility analysis demonstrated practicality in real-world networks and systems, thereby bounding risk prior to incurring the full costs of development. The SecretMatch™ Prom app leverages one embodiment of the patented HN protocol to achieve privacy-enhanced and fair matchmaking with ILW. The endeavor led to practical lessons learned and recommendations for privacy engineering in an era of rapidly evolving privacy legislation. Next steps include the design of SecretMatch™ apps for contexts like voting negotiations in legislative bodies and executive recruiting. The roadmap toward a quantum-resistant SecretMatch™ began with the design of a Hybrid Post-Quantum Horne-Nair (HPQHN) protocol. Future directions include enhancements to HPQHN, a fully post-quantum HN protocol, and more.
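    The patented HN protocol’s message formats are not reproduced here, but one piece of the design described above, an untrusted matchmaker that never sees identities or pseudo-identities, can be sketched. In the deliberately simplified Java illustration below (all names invented), the matchmaker stores only opaque client-derived tokens and reports a match when two distinct sessions submit the same token; how clients derive equal tokens for the same identity-linked wish is the substance of the HN protocol and is out of scope here.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified illustration of the matchmaker role only: it stores no
// identities, just opaque tokens, and cannot tell what wish a token encodes.
public class MatchmakerSketch {
    // opaque token -> distinct sessions that have submitted it
    private final Map<String, Set<String>> pending = new HashMap<>();

    // Returns true only once two distinct sessions have submitted the same
    // token, i.e. both parties hold the same identity-linked wish.
    public boolean submit(String sessionId, String opaqueToken) {
        Set<String> sessions =
                pending.computeIfAbsent(opaqueToken, t -> new HashSet<>());
        sessions.add(sessionId);
        return sessions.size() >= 2;
    }
}
```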

    NATO Code of Best Practice for C2 Assessment

    This major revision to the Code of Best Practice (COBP) for C2 Assessment is the product of a NATO Research and Technology Organisation (RTO) sponsored Research Group (SAS-026). It represents over a decade of work by many of the best analysts from the NATO countries. A symposium (SAS-039), hosted by the NATO Consultation, Command and Control Agency (NC3A), provided the venue for a rigorous peer review of the code. This new version of the COBP for C2 assessment builds upon the initial version produced by SAS-002, which focused on the analysis of ground forces at a tactical echelon in mid- to high-intensity conflicts. In developing this new version, SAS-026 focused on a changed geopolitical context characterized by a shift from preoccupation with a war involving NATO and the Warsaw Pact to concern for a broad range of smaller military conflicts and Operations Other Than War (OOTW). This version also takes into account the impact of significantly improved information-related capabilities and their implications for reducing the fog and friction traditionally associated with conflict. Significantly reduced levels of fog and friction offer an opportunity for the military to develop new concepts of operations, new organizational forms, and new approaches to C2, as well as to the processes that support it. In addition, SAS-026 was cognizant that NATO operations are likely to include coalitions of the willing that might involve Partnership for Peace (PfP) nations, other partners outside of NATO, international organizations, and NGOs. Cost analyses continue to be excluded because costing approaches differ among NATO members, so no single approach would be appropriate. Advances in technology are expected to continue at an increasing rate and to spur both sustaining and disruptive innovation in military organizations. It is to be expected that this COBP will need to be periodically revisited in light of these developments.

    The Making of Asia’s First Bilateral FTA: Origins and Regional Implications of the Japan–Singapore Economic Partnership Agreement

    Japanese Prime Minister Junichiro Koizumi ushered in a new era in Japan’s international trade policy in January 2002 when he and his Singaporean counterpart, Goh Chok Tong, signed the Japan–Singapore Economic Partnership Agreement (JSEPA), the first bilateral Free Trade Agreement (FTA) signed between Asian countries. This trade strategy also reflected Japan’s interest in launching its so-called multi-layered trade policy, which meant pursuing bilateral and regional trading arrangements, including FTAs, to complement GATT/WTO-based multilateralism and reinvigorate efforts to achieve global trade liberalisation. This paper examines how and why Japan and Singapore decided to pursue FTAs, what interests each perceived in that pursuit, what elements contributed to the two countries being linked in this trade policy arrangement, and what implications the JSEPA has had for the FTA movement in East Asia. It argues that the JSEPA was made possible mainly through Singapore’s initial offer to exclude agricultural products from tariff elimination. Japan, however, faced problems in seeking FTAs with other ASEAN countries, which were less developed than Singapore and had a higher proportion of agricultural exports, as the exclusion of specific agricultural products, such as rice and sugar, would contradict Japan’s claim that its FTAs would bolster the WTO-based multilateral system. The proliferation of FTAs in East Asia may generate a spaghetti-bowl effect, with varying rules of origin that may divert and distort trade, but the “new age” aspects of the Japan–Singapore agreement will also have some positive economic effects. Although the preferential trade elements of the agreement are detrimental, the limited scope of tariff elimination results in a smaller trade-diversion effect on trading partners. The Japan–Singapore agreement therefore carries symbolic meaning in trade policy debates and signifies a paradigm shift in Japan’s international trade policy.

    Full Volume 13, Issue 1


    Anonymization and Risk

    Perfect anonymization of data sets that contain personal information has failed. But the process of protecting data subjects in shared information remains integral to privacy practice and policy. While the deidentification debate has been vigorous and productive, there is no clear direction for policy. As a result, the law has been slow to adopt a holistic approach to protecting data subjects when data sets are released to others. Currently, the law focuses on whether an individual can be identified within a given data set. We argue that the best way to move data release policy past the alleged failures of anonymization is to focus on the process of minimizing the risk of reidentification and sensitive attribute disclosure, not on preventing harm. Process-based data release policy, which resembles the law of data security, will help us move past the limitations of asking only whether data sets have been “anonymized.” It draws upon different tactics to protect the privacy of data subjects, including accurate deidentification rhetoric, contracts prohibiting reidentification and sensitive attribute disclosure, data enclaves, and query-based strategies, to match the required protections with the level of risk. By focusing on process, data release policy can better balance privacy and utility where nearly all data exchanges carry some risk.
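    As one concrete, hypothetical example of process-based thinking about reidentification risk, the Java sketch below computes a simple k-anonymity measure over chosen quasi-identifiers; a release process might require a minimum group size before a data set is shared. This is an illustration of risk-based reasoning, not a technique the article prescribes.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of one process-based risk check: k-anonymity
// over chosen quasi-identifiers (e.g. zip code, age band, sex).
public class KAnonymityCheck {
    // Each record is the list of its quasi-identifier values.
    static int smallestGroup(List<List<String>> records) {
        Map<List<String>, Integer> counts = new HashMap<>();
        for (List<String> quasiIds : records) {
            counts.merge(quasiIds, 1, Integer::sum);
        }
        return counts.values().stream().min(Integer::compare).orElse(0);
    }

    // A release process might require every quasi-identifier combination
    // to appear at least k times before the data set is shared.
    static boolean satisfiesK(List<List<String>> records, int k) {
        return smallestGroup(records) >= k;
    }
}
```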

    StateSim: Lessons Learned from 20 Years of a Country Modeling and Simulation Toolset

    A holy grail for military, diplomatic, and intelligence analysis is a valid set of software agent models that behave as the ethno-political factions of interest, so that one can test the effects of alternative courses of action in different countries. This article explains StateSim, a country modeling approach that synthesizes best-of-breed theories from across the social sciences and that has helped numerous organizations over 20 years to study insurgents, gray-zone actors, and other societal instabilities. The country modeling literature is summarized (Sect. 1.1) and synthetic inquiry is contrasted with scientific inquiry (Sects. 1.2 and 2). Section 2 also describes many fielded StateSim applications and hundreds of past acceptability tests and validity assessments. Section 3 then describes how users now construct and run ‘first pass’ country models within hours using the StateSim Generator, and Section 4 offers two country analyses that illustrate this approach. The conclusions explain lessons learned.
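    StateSim’s models are far richer than anything shown here, but the agent-based flavor of such country modeling can be suggested with a toy Java sketch in which factions carry power and grievance, and instability is flagged when grievance crosses a threshold. All names, parameters, and dynamics below are invented for illustration.

```java
import java.util.List;
import java.util.Random;

// Toy agent-based sketch with invented dynamics; StateSim's real models
// synthesize far richer social-science theory than this.
public class FactionSimSketch {
    private static final Random RNG = new Random(42);

    static class Faction {
        final String name;
        double power;      // relative coercive/economic capacity
        double grievance;  // accumulated dissatisfaction in [0, 1]
        Faction(String name, double power, double grievance) {
            this.name = name;
            this.power = power;
            this.grievance = grievance;
        }
    }

    // One simulation tick: grievance drifts up for factions with a small
    // power share, down for dominant ones, plus a little noise; a faction
    // whose grievance crosses a threshold is flagged as unstable.
    static void step(List<Faction> factions) {
        double totalPower = factions.stream().mapToDouble(f -> f.power).sum();
        for (Faction f : factions) {
            double share = f.power / totalPower;
            f.grievance = Math.min(1.0, Math.max(0.0,
                    f.grievance + 0.05 * (0.5 - share) + 0.01 * RNG.nextGaussian()));
            if (f.grievance > 0.8) {
                System.out.println(f.name + " flagged unstable (grievance "
                        + String.format("%.2f", f.grievance) + ")");
            }
        }
    }
}
```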