
    Time-Dependent Quotas for the Southern Bluefin Tuna Fishery

    It is now officially recognized by the governments of Australia and Japan that the southern bluefin tuna fishery has been overexploited and that harvests must be controlled. A dynamic programming model applicable to multicohort fisheries is developed for finding approximately optimal time-dependent quotas. Results from applying the model to the southern bluefin tuna fishery indicate that restricting or eliminating the Australian catch of under-4-year-olds would benefit both countries.
    Keywords: Environmental Economics and Policy; International Relations/Trade; Resource/Energy Economics and Policy
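    The quota search described above can be illustrated with a toy finite-horizon dynamic program. The growth, price, and discount parameters below are hypothetical, and the model tracks a single aggregate stock rather than the paper's multiple cohorts; it only sketches the idea of choosing time-dependent harvest levels.

```python
# Toy finite-horizon dynamic program for time-dependent harvest quotas.
# All parameters (growth, price, discount) are hypothetical, and the
# stock is a single aggregate rather than the paper's multiple cohorts.

def optimal_quotas(stock, horizon=5, growth=1.3, price=1.0, discount=0.95):
    """Return (value, plan): best discounted revenue and per-year catches."""
    fractions = [i / 10 for i in range(11)]  # candidate quota levels 0%..100%

    def value(s, t):
        if t == horizon or s <= 0:
            return 0.0, []
        best_v, best_plan = -1.0, []
        for f in fractions:
            catch = f * s
            escapement = (s - catch) * growth  # survivors grow/reproduce
            v, plan = value(escapement, t + 1)
            total = price * catch + discount * v
            if total > best_v:
                best_v, best_plan = total, [catch] + plan
        return best_v, best_plan

    return value(stock, 0)
```

    With these illustrative parameters, growth times discount exceeds one, so the program defers all harvest to the final year; a lower growth rate or higher discount rate shifts catches earlier, which is the kind of time dependence the quotas capture.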

    Stranded Capital in Fisheries: The Pacific Coast Groundfish/Whiting Case

    Current rationalization options for West Coast groundfish trawl fisheries include significant allocations of harvester quota to processors, justified as compensation for “stranded capital.” This article discusses the origin of the concept of stranded capital, its use in other policy settings, preconditions, measurement, and remedies for addressing it. Our main finding is that rationalization of fisheries is unlikely to generate significant processing stranded capital. Most capital involved in fisheries processing is malleable and not likely to be devalued as a result of rationalization. If policy makers nevertheless judge it desirable to consider compensation, a legitimate process would tie compensation to anticipated or demonstrated capital losses. Current policies proposed on the U.S. West Coast to transfer harvester quota are arbitrary and unsupported by empirical estimates of the magnitude of the problem. They are likely to generate important spillover effects that could negate some of the intended benefits of rationalization.
    Keywords: stranded capital; rationalization; malleable capital; processor compensation; Industrial Organization; Political Economy; JEL: Q22

    Non-Nuclear, Low-Carbon, or Both? The Case of Taiwan

    The Fukushima nuclear accident in Japan has renewed debates on the safety of nuclear power, possibly hurting the role of nuclear power in efforts to limit CO2 emissions. I develop a dynamic economy-wide model of Taiwan with a detailed set of technology options in the power sector to examine the implications of adopting different nuclear power policies on CO2 emissions and the economy. Absent a carbon mitigation target, limiting nuclear power has a small economic cost for Taiwan, but CO2 emissions may increase by more than 3.5% by 2035 when nuclear is replaced by fossil-based generation. With a low-carbon target of a 50% reduction from year 2000 levels by 2050, if the nuclear option and carbon sequestration are not viable, gas-fired power would provide almost 90% of electricity output due to the limited renewable resources. In particular, wind power would account for 1.6% to 4.9% of that output, depending on the extent to which it can rely on other back-up capacity. With both non-nuclear and low-carbon policies, deploying carbon sequestration on fossil-based generation can significantly reduce the negative GDP impact on the economy. Lastly, lowering carbon mitigation costs further is possible with expanded nuclear capacity.

    The author gratefully acknowledges comments and suggestions from John Reilly, Frank Hsiao, Audrey Resutek, and seminar participants at the MIT EPPA meeting, and financial support from the MIT Joint Program on the Science and Policy of Global Change. The Program is funded by the following institutions under various grants: the U.S. Department of Energy; the U.S. Environmental Protection Agency; the U.S. National Science Foundation; the U.S. National Aeronautics and Space Administration; the U.S. National Oceanic and Atmospheric Administration; the U.S. Federal Aviation Administration; the Electric Power Research Institute; and a consortium of industrial and foundation sponsors (for a complete list, see http://globalchange.mit.edu/sponsors/all).

    Securing the Foundations of Practical Information Flow Control

    Language-based information flow control (IFC) promises to secure computer programs against malicious or incompetent programmers by addressing key shortcomings of modern programming languages. In spite of showing great promise, the field remains under-utilised in practice. This thesis makes contributions to the theoretical foundations of IFC aimed at making the techniques practically applicable. It addresses two primary topics: IFC as a library and IFC without false alarms. The contributions range from foundational observations about soundness and completeness to practical considerations of efficiency and expressiveness.
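    The "IFC as a library" idea can be sketched without special language support: sensitive data lives inside a labeled container, and the only way to compute with it is through operations that propagate the label. The sketch below is a toy Python illustration of that idea, not the thesis's actual design; all names are hypothetical.

```python
# Minimal sketch of IFC "as a library": secret data lives in a labeled
# container, and computations on it must go through operations that
# propagate the label. Toy illustration; all names are hypothetical.

class Labeled:
    def __init__(self, value, label):
        self._value = value          # hidden from ordinary code
        self.label = label           # e.g. "public" or "secret"

    def map(self, f):
        # Computing on a labeled value propagates its label.
        return Labeled(f(self._value), self.label)

def declassify(lv, authority):
    # Releasing a secret requires explicit authority; otherwise fail loudly.
    if lv.label == "secret" and authority != "trusted":
        raise PermissionError("cannot release secret data")
    return lv._value

secret = Labeled(42, "secret")
doubled = secret.map(lambda x: x * 2)  # allowed: result stays "secret"
```

    The soundness question the thesis studies is whether such a library discipline actually prevents leaks in the host language, and the "false alarms" question is whether it rejects only genuinely insecure programs.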

    Journey Beyond Full Abstraction: Exploring Robust Property Preservation for Secure Compilation

    Good programming languages provide helpful abstractions for writing secure code, but the security properties of the source language are generally not preserved when compiling a program and linking it with adversarial code in a low-level target language (e.g., a library or a legacy application). Linked target code that is compromised or malicious may, for instance, read and write the compiled program’s data and code, jump to arbitrary memory locations, or smash the stack, blatantly violating any source-level abstraction. By contrast, a fully abstract compilation chain protects source-level abstractions all the way down, ensuring that linked adversarial target code cannot observe more about the compiled program than what some linked source code could about the source program. However, while research in this area has so far focused on preserving observational equivalence, as needed for achieving full abstraction, there is a much larger space of security properties one can choose to preserve against linked adversarial code. And the precise class of security properties one chooses crucially impacts not only the supported security goals and the strength of the attacker model, but also the kind of protections a secure compilation chain has to introduce. We are the first to thoroughly explore a large space of formal secure compilation criteria based on robust property preservation, i.e., the preservation of properties satisfied against arbitrary adversarial contexts. We study robustly preserving various classes of trace properties such as safety, of hyperproperties such as noninterference, and of relational hyperproperties such as trace equivalence. This leads to many new secure compilation criteria, some of which are easier to practically achieve and prove than full abstraction, and some of which provide strictly stronger security guarantees.
For each of the studied criteria we propose an equivalent “property-free” characterization that clarifies which proof techniques apply. For relational properties and hyperproperties, which relate the behaviors of multiple programs, our formal definitions of the property classes themselves are novel. We order our criteria by their relative strength and show several collapses and separation results. Finally, we adapt existing proof techniques to show that even the strongest of our secure compilation criteria, the robust preservation of all relational hyperproperties, is achievable for a simple translation from a statically typed to a dynamically typed language.

    EasyUC: using EasyCrypt to mechanize proofs of universally composable security

    We present a methodology for using the EasyCrypt proof assistant (originally designed for mechanizing the generation of proofs of game-based security of cryptographic schemes and protocols) to mechanize proofs of security of cryptographic protocols within the universally composable (UC) security framework. This allows, for the first time, the mechanization and formal verification of the entire sequence of steps needed for proving simulation-based security in a modular way: specifying a protocol and the desired ideal functionality; constructing a simulator and demonstrating its validity, via reduction to hard computational problems; and invoking the universal composition operation and demonstrating that it indeed preserves security. We demonstrate our methodology on a simple example: stating and proving the security of secure message communication via a one-time pad, where the key comes from a Diffie-Hellman key exchange, assuming ideally authenticated communication. We first put together EasyCrypt-verified proofs that: (a) the Diffie-Hellman protocol UC-realizes an ideal key-exchange functionality, assuming hardness of the Decisional Diffie-Hellman problem, and (b) one-time-pad encryption, with a key obtained using ideal key-exchange, UC-realizes an ideal secure-communication functionality. We then mechanically combine the two proofs into an EasyCrypt-verified proof that the composed protocol realizes the same ideal secure-communication functionality. Although formulating a methodology that is both sound and workable has proven to be a complex task, we are hopeful that it will prove to be the basis for mechanized UC security analyses for significantly more complex protocols and tasks.
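    A toy end-to-end run of the example protocol (Diffie-Hellman key exchange followed by one-time-pad encryption) fits in a few lines. The group parameters and key derivation below are deliberately tiny and insecure; they illustrate the shape of the composed protocol, not its EasyCrypt formalization.

```python
# Toy run of the composed example protocol: Diffie-Hellman key exchange,
# then one-time-pad encryption with the agreed key. Group size and key
# derivation are deliberately tiny and insecure; illustration only.

import secrets

P, G = 8191, 3  # toy prime modulus (2**13 - 1) and generator

def dh_keypair():
    x = secrets.randbelow(P - 2) + 1      # private exponent
    return x, pow(G, x, P)                # (private, public = G^x mod P)

def otp(key_bytes, msg):
    # One-time pad: XOR message bytes with equally many key bytes.
    return bytes(k ^ m for k, m in zip(key_bytes, msg))

# Key exchange, assumed to run over ideally authenticated channels.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
shared = pow(b_pub, a_priv, P)            # both sides compute G^(a*b)

msg = b"hi"
key = shared.to_bytes(2, "big")           # toy key derivation: 2 bytes
ciphertext = otp(key, msg)
```

    The UC claim in the paper is about this composition: each sub-protocol realizes its ideal functionality, and the composition theorem lifts those guarantees to the combined protocol.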

    The Effects of ITQ Management on Fishermen’s Welfare When the Processing Sector is Imperfectly Competitive

    In this paper we use a general model of imperfect competition to predict welfare changes within an open-access fishery transitioning to individual transferable quota (ITQ) management. Although related research has explored the effects of market power in the harvesting sector on ITQ performance, none has considered the implications of an imperfectly competitive processing sector. This study addresses this question specifically in the context of the Atlantic herring fishery, although its implications are relevant to all fisheries with similar industry structure. Our results show that ITQs could have a negative impact on fishermen’s welfare when processors have market power and the cap on aggregate harvest is binding or becomes binding with the implementation of ITQs.
    Keywords: ITQ; imperfect competition; welfare analysis; fisheries
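    The mechanism behind the result can be illustrated with a stylized monopsony example (not the paper's model): fishermen face a linear inverse supply curve, and a processor with buying power restricts quantity below the competitive level, lowering both the price paid and fishermen's surplus. All parameters below are hypothetical.

```python
# Stylized monopsony illustration (not the paper's model). Fishermen's
# inverse supply is p(q) = c0 + c1*q; the processor resells at a fixed
# downstream price p_out. All parameters are hypothetical.

def competitive_quantity(p_out, c0, c1):
    # Price-taking processors buy until the supply price equals p_out.
    return (p_out - c0) / c1

def monopsony_quantity(p_out, c0, c1):
    # A monopsonist equates marginal outlay c0 + 2*c1*q with p_out,
    # buying less and paying a lower price than the competitive outcome.
    return (p_out - c0) / (2 * c1)

def fishermen_surplus(q, c0, c1, price_paid):
    # Producer surplus: area between the price paid and the supply curve.
    return (price_paid - c0) * q - 0.5 * c1 * q * q
```

    With p_out = 10, c0 = 2, and c1 = 1, the competitive outcome is q = 8 at price 10 (surplus 32), while the monopsonist buys q = 4 at price 6 (surplus 8). If an aggregate harvest cap binds at or below the monopsony quantity, the quota rather than competition determines the outcome, which is the case where the paper finds ITQs can reduce fishermen's welfare.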