51 research outputs found

    Doctor of Philosophy

    The problem of pollution is not going away. As global Gross Domestic Product (GDP) rises, so does pollution. Because of environmental externalities, polluting firms lack the incentive to abate their pollution, and without regulations, markets do not adequately control pollution. While regulators are responsible for enacting regulations, the firms ultimately determine the environmental outcomes through their production decisions. Furthermore, polluting industries are typically large and concentrated, raising the concern that market power may be present in these industries. In this dissertation, we study the interactions between powerful, strategic firms operating under pollution regulations and the regulator when markets are imperfectly competitive. An important contribution of this work is our integrated pollution-production model, which incorporates the firms' emissions, abatement technologies, the damage from pollution, and three widely used regulatory mechanisms: Cap, Cap-and-Trade, and Tax. The firms compete with each other and influence prices by setting their production quantities. In our model, the firms have many options to comply with the pollution constraints enforced by the regulator, including abating pollution, reducing output, trading emission allowances, paying emission taxes, investing in abatement innovations, colluding, and combining some of these options. Following the introduction in Chapter 1, we address three broad questions in three separate chapters.
    • Chapter 2: What is the effect of the pollution control mechanisms on firms, consumers, and society as a whole? Which mechanisms and policies should regulators use to control pollution in a fair, effective, and practical manner?
    • Chapter 3: Does Cap-and-Trade enable collusion? If it does, what are the effects of collusion?
    • Chapter 4: Which mechanisms encourage more investment in abatement innovations?
    Our results apply to different types of pollutants and market structures. Our research provides guidelines for both policy-makers and regulated firms
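The dissertation's integrated pollution-production model is not reproduced in the abstract. As a minimal illustrative sketch of the Tax mechanism under imperfect competition, consider a symmetric Cournot duopoly with linear inverse demand, constant marginal cost, and a fixed per-unit emission intensity; all parameter names and numbers below are assumptions, not values from the dissertation:

```python
# Illustrative sketch (not the dissertation's model): Cournot duopoly under an
# emissions Tax. Assumed setup: inverse demand P = a - b*(q1 + q2), constant
# marginal cost c, emission intensity e per unit of output, tax rate t.
def cournot_quantity(a, b, c, e, t):
    """Symmetric Cournot equilibrium quantity per firm under an emissions tax.

    Each firm maximizes (P - c - t*e) * q_i; with two firms, the first-order
    conditions give q_i = (a - c - t*e) / (3*b).
    """
    return max(0.0, (a - c - t * e) / (3 * b))

def total_emissions(a, b, c, e, t):
    """Industry emissions: two firms, each emitting e per unit produced."""
    return 2 * e * cournot_quantity(a, b, c, e, t)

# A higher tax lowers equilibrium output, and hence total emissions.
q_no_tax = cournot_quantity(a=100, b=1, c=10, e=2, t=0)   # (100-10)/3 = 30
q_taxed  = cournot_quantity(a=100, b=1, c=10, e=2, t=5)   # (100-10-10)/3
em_no_tax = total_emissions(a=100, b=1, c=10, e=2, t=0)
em_taxed  = total_emissions(a=100, b=1, c=10, e=2, t=5)
```

Even this toy version shows the strategic interaction the dissertation studies: the tax is passed partly into reduced output rather than abated entirely by the firms.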

    OBJECT ORIENTED MODELING: MEANS FOR DEALING WITH SYSTEM COMPLEXITY

    Abstract This paper presents the concepts of and ideas behind the object oriented modeling paradigm in the context of rapid prototyping of complex physical system designs. It is shown that object oriented modeling software is an essential tool in flexible manufacturing, which helps reduce both the cost and the time needed to manufacture customized goods using prefabricated components

    On a bi-layer shallow-water problem

    Abstract In this paper, we prove an existence and uniqueness result for a bi-layer shallow water model in depth-mean velocity formulation. Some smoothness results for the solution are also obtained. In a previous work we proved the same results for a one-layer problem. Now the difficulty arises from the terms coupling the two layers. In order to obtain the energy estimate, we use a special basis which allows us to bound these terms.

    Object Oriented Modeling of Hybrid Systems

    ABSTRACT A new methodology for the object oriented description of models consisting of a mixture of continuous and discrete components is presented. The object oriented paradigm enables the user to describe such models in a modular fashion that permits the reuse of these models independently of the environment in which they are to be embedded. The paper explains the basic mechanisms needed for object oriented modeling of hybrid systems by means of language constructs available in the object oriented modeling language Dymola. It then addresses more advanced concepts such as variable structure models containing e.g. ideal electrical switches, ideal diodes, and dry friction

    Efficient Parallel Statistical Model Checking of Biochemical Networks

    We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic. Exact probabilistic verification approaches such as, for example, CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions out of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the methodology we propose uses a stochastic simulation algorithm for generating execution samples; however, there are three key aspects that improve the efficiency: first, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability of P to hold is based on an efficient variant of the Wilson method which ensures a faster convergence. Third, the whole methodology is designed for parallel execution, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture
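The paper's "efficient variant" of the Wilson method is not specified in the abstract; a minimal sketch of the standard Wilson score interval, which, unlike the Wald interval, never leaves [0, 1] and behaves well for extreme success counts, might look like this (the function name and the 95% z-value are assumptions):

```python
# Sketch of the Wilson score interval for the probability that a property P
# holds, estimated from n sampled executions of which k satisfied P.
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion k/n."""
    if n == 0:
        return (0.0, 1.0)
    p_hat = k / n
    denom = 1.0 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return (center - half, center + half)

# 530 satisfying traces out of 1000 samples.
lo, hi = wilson_interval(k=530, n=1000)
```

In a statistical model checker, sampling would continue until `hi - lo` drops below a user-chosen interval width.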

    Probabilistic Model-Based Safety Analysis

    Model-based safety analysis approaches aim at finding critical failure combinations by analysis of models of the whole system (i.e. software, hardware, failure modes, and environment). The advantage of these methods compared to traditional approaches is that the analysis of the whole system gives more precise results. Only a few model-based approaches have been applied to answer quantitative questions in safety analysis, often limited to specific failure propagation models or certain types of failure modes, or ignoring system dynamics and behavior, as direct quantitative analysis uses large amounts of computing resources. New achievements in the domain of (probabilistic) model checking now allow for overcoming this problem. This paper shows how functional models based on synchronous parallel semantics, which can be used for system design, implementation, and qualitative safety analysis, can be directly re-used for (model-based) quantitative safety analysis. Accurate modeling of different types of probabilistic failure occurrence is shown, as well as accurate interpretation of the results of the analysis. This allows for reliable and expressive assessment of the safety of a system in early design stages

    R-linearizability: An Extension of Linearizability to Replicated Objects

    The paper extends linearizability, a consistency criterion for concurrent systems, to the replicated context, where availability and performance are enhanced by using redundant objects. The mode of operation on sets of replicas and the consistency criterion of R-linearizability are defined. An implementation of R-linearizable replicated atoms (on which only read and write operations are defined) is described. It is realized in the virtually synchronous model, based on a group view mechanism. This framework provides reliable multicast primitives enabling a fault-tolerant implementation. 1 Introduction Two problems are of growing importance as distributed systems become larger, with more objects, more cooperation among users, and new patterns of connectivity: maintaining consistency and managing replication of shared objects. Maintaining consistency requires controlling concurrent access by different processes to the objects and ensuring atomicity in the presence of failures. Replicatio..
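As a toy, single-process sketch of the read/write replicated atoms described above: a write is applied to every replica (standing in for a reliable multicast delivered in the same order everywhere), so a subsequent read may consult any replica. This ignores concurrency, failures, and group views, which are the paper's actual subject; all names are illustrative:

```python
# Toy sketch of a replicated atom: only read and write are defined.
class ReplicatedAtom:
    def __init__(self, n_replicas, initial=None):
        # Each replica holds its own copy of the atom's value.
        self.replicas = [initial] * n_replicas

    def write(self, value):
        # "Multicast" the write: every replica applies it in the same order,
        # so all copies stay identical.
        for i in range(len(self.replicas)):
            self.replicas[i] = value

    def read(self, replica_index=0):
        # Because writes reach all replicas, any replica may serve a read.
        return self.replicas[replica_index]

atom = ReplicatedAtom(n_replicas=3, initial=0)
atom.write(42)
values = [atom.read(i) for i in range(3)]
```

A fault-tolerant version would replace the loop in `write` with the virtually synchronous multicast primitives the paper builds on.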

    Specification of the PVMVis visualisation tool

    Introduction The PVMVis visualisation tool is one of the three tools of the EDPEPPS project. The purpose of this tool is to offer the user graphical views representing the execution of the designed parallel application. Graphical views permit the visualisation of the behaviour and the performance of a parallel application with respect to time, the design, and the platform. The visualisation is driven by a trace file generated by the simulator engine or a real execution, and by the PVMGL file generated by PVMGraph [4]. This post-mortem analysis takes place after the simulation or the real execution and offers the user several possibilities for performance evaluation. The concept used for the visualisation is an event-based animation (Fig. 1). The event trace animation is useful for understanding the behaviour of the parallel application. Thanks to this view it is possible to spot incorrect behaviour such as deadlocks or bottlenecks. Within this animation concept, there are three

    Differentiability With Respect to Initial Data for a Scalar Conservation Law

    We linearize a scalar conservation law around an entropy initial datum. The resulting equation is a linear conservation law with a discontinuous coefficient, solved in the context of duality solutions, for which existence and uniqueness hold. We interpret these solutions as weak derivatives with respect to the initial data for the nonlinear equation. 1. Introduction Consider the one-dimensional scalar conservation law

    $\partial_t u + \partial_x f(u) = 0, \quad 0 < t < T, \; x \in \mathbb{R},$  (1)

    where $f$ is a $C^1$ convex function, provided with entropy admissible initial data $u^0 \in L^\infty(\mathbb{R})$. Kruzkov's results [4] assert that the entropy solution $u$ to (1) lies in $L^\infty(]0,T[ \times \mathbb{R}) \cap C(0,T; L^1_{loc}(\mathbb{R}))$, and that the following contraction property holds: if $u$ (resp. $v$) corresponds to the initial data $u^0$ (resp. $v^0$), then for all $R > 0$ and any $t > 0$

    $\int_{|x| \le R} |u(t,x) - v(t,x)| \, dx \le \int_{|x| \le R + Mt} |u^0(x) - v^0(x)| \, dx,$  (2)

    where $M = \max\{|f'(s)| : |s| \le \max(\|u^0\|_{L^\infty}, \|v$..
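Kruzkov's contraction property (2) can be observed numerically: monotone finite-volume schemes such as Godunov's inherit the $L^1$ contraction, so the discrete $L^1$ distance between two approximate entropy solutions never grows. The sketch below uses Burgers' flux $f(u) = u^2/2$ and a discretization that is an assumed illustration, not anything from the paper:

```python
# Numerical illustration (assumed setup): the discrete L^1 distance between two
# Godunov solutions of Burgers' equation, f(u) = u^2/2, is non-increasing in
# time, mirroring Kruzkov's contraction property for entropy solutions.
import numpy as np

def godunov_flux(ul, ur):
    """Godunov numerical flux for the convex flux f(u) = u^2 / 2."""
    f = lambda u: 0.5 * u * u
    if ul <= ur:
        # Rarefaction case: flux minimized over [ul, ur] (0 if it straddles 0).
        return 0.0 if ul <= 0.0 <= ur else min(f(ul), f(ur))
    # Shock case: flux maximized over [ur, ul].
    return max(f(ul), f(ur))

def step(u, dt, dx):
    """One explicit conservative update; boundary cells are held fixed."""
    flux = np.array([godunov_flux(u[i], u[i + 1]) for i in range(len(u) - 1)])
    u_new = u.copy()
    u_new[1:-1] -= (dt / dx) * (flux[1:] - flux[:-1])
    return u_new

x = np.linspace(-2.0, 2.0, 201)
dx = x[1] - x[0]
dt = 0.4 * dx                       # CFL: (dt/dx) * max|f'(u)| = 0.4 <= 1
u = np.where(x < 0.0, 1.0, 0.0)     # Riemann data -> right-moving shock
v = np.where(x < 0.2, 1.0, 0.0)     # shifted initial data
d0 = np.sum(np.abs(u - v)) * dx     # initial L^1 distance = 0.2
for _ in range(100):                # final time 100*dt = 0.8, waves stay interior
    u, v = step(u, dt, dx), step(v, dt, dx)
d_final = np.sum(np.abs(u - v)) * dx
```

Under the CFL condition the Godunov update is monotone, which is exactly why `d_final` cannot exceed `d0` (up to roundoff); this is the discrete analogue of (2) with $R$ large enough to contain both solutions' supports.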

    Stepwise Refinement of Control Software - A Case Study Using RAISE

    We develop a control program for a realistic automation problem by stepwise refinement. We focus on exemplifying appropriate levels of abstraction for the refinement steps. By using phases as a means for abstraction, safety requirements are specified on a high level of abstraction and can be verified using process algebra. The case study is carried out using the RAISE specification language, and we report on some experiences using the RAISE tool set. 1 Introduction For safety-critical software, like that which controls the machines of a production plant, the demands on reliability and correctness are particularly high. An erroneous control program of, say, a robot may cause considerable damage to the machines themselves, or may even threaten human lives. Careful design of the control software is therefore most important. However, due to the complexity of these applications, it becomes impossible to deal at the same time with all details of the devices involved. In the stepwise refin..