277 research outputs found
DNA as a universal substrate for chemical kinetics
Molecular programming aims to systematically engineer molecular and chemical systems of autonomous function and ever-increasing complexity. A key goal is to develop embedded control circuitry within a chemical system to direct molecular events. Here we show that systems of DNA molecules can be constructed that closely approximate the dynamic behavior of arbitrary systems of coupled chemical reactions. By using strand displacement reactions as a primitive, we construct reaction cascades with effectively unimolecular and bimolecular kinetics. Our construction allows individual reactions to be coupled in arbitrary ways such that reactants can participate in multiple reactions simultaneously, reproducing the desired dynamical properties. Thus, arbitrary systems of chemical equations can be compiled into real chemical systems. We illustrate our method on the Lotka–Volterra oscillator, a limit-cycle oscillator, a chaotic system, and systems implementing feedback digital logic and algorithmic behavior.
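As a concrete illustration of the kind of target dynamics such a compilation scheme must reproduce (not the paper's DNA implementation itself), the sketch below integrates the mass-action equations of the Lotka–Volterra chemical reaction network X1 -> 2X1, X1 + X2 -> 2X2, X2 -> 0; the rate constants and initial concentrations are arbitrary illustration values.

    # Mass-action ODEs for the Lotka-Volterra CRN; a hedged sketch,
    # not the DNA strand-displacement construction described above.
    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k2, k3 = 1.0, 1.0, 1.0            # assumed rate constants

    def lotka_volterra(t, x):
        x1, x2 = x
        return [k1 * x1 - k2 * x1 * x2,   # prey: replication minus predation
                k2 * x1 * x2 - k3 * x2]   # predator: growth minus decay

    sol = solve_ivp(lotka_volterra, (0.0, 30.0), [1.5, 0.5],
                    t_eval=np.linspace(0.0, 30.0, 300))
    print(sol.y[:, -1])                   # the two species oscillate

Any DNA realization of these equations is judged by how closely its measured concentrations track trajectories like these.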
Antitrust Damages for Consumer Welfare Loss
Section 4 of the Clayton Act provides that any person who is injured in his business or property by reason of anything forbidden in the antitrust laws shall recover threefold the damages by him sustained. The current private-enforcement model usually permits plaintiffs to recover damages based upon the excessive prices charged to consumers. However, economists see the real loss to society from an antitrust violation as the consumer welfare loss that results from reduced output. The authors have been unable to locate any antitrust case which has permitted recovery of damages for this consumer welfare loss. This article therefore addresses the following issues: (1) if consumer welfare loss is the true measure of the damage to society from an antitrust violation, should it be included in a damage recovery; (2) if consumer welfare loss is recoverable, who is the proper party to recover for the loss; and (3) what difficulties arise in measuring such a loss for purposes of awarding damages.
Dynamic separation of suspended solids
A pilot study of a dynamic means of separating suspended solids from a two-phase flow is described. The separation is achieved by passing the two-phase flow through an orifice. The difference in mass between the suspended solids and the fluid causes the path lines of the solids to deviate from the fluid streamlines when flowing through the abrupt contraction. Withdrawal of the center portion of the jet issuing from the orifice results in a primary separation of the solids from the fluid. The utility of this separation technique is indicated.
Project # A-015-MO, Agreement # 14-01-0001-184
Optimization of operation of a system of flood control reservoirs
Students supported: 1 graduate student.
Optimization of the operation of a system of flood-control reservoirs is established by the application of mathematical programming. The mathematical procedure is applied to two different types of systems: reservoirs in parallel and reservoirs in tandem. The operational matrix to be optimized is made up of the objective function and the constraining equations. The objective function to be maximized is made up of the time sequence of releases from the reservoirs. The physical, structural, and hydrological limitations are described by the constraint equations. All equations in the operational matrix are linear. Inflows to the reservoirs of the system and the initial conditions are assumed to be known, as are the reservoir capacities and the downstream-channel maximum and minimum capacities. The objective of the operational matrix is to maximize the sum of releases, thus minimizing the storage occupied by flood water. Setup of the operational matrix is carried out by a digital computer program, and the optimization is carried out by applying the linear-programming algorithm of MPS/360. Results of the procedure are shown for a three-reservoir system in the Kansas River basin (U.S.A.) using actual data.
Project # B-065-MO, Agreement # 14-31-0001-360
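A minimal sketch of the linear program described above, for a single reservoir over four periods, with SciPy standing in for MPS/360; all inflows, capacities, and channel limits are hypothetical illustration values, not the Kansas River data:

    # Maximize total releases subject to mass balance, storage bounds,
    # and channel limits: a toy version of the operational matrix.
    import numpy as np
    from scipy.optimize import linprog

    T = 4                                          # time periods
    inflow = np.array([30.0, 50.0, 40.0, 20.0])    # assumed inflows
    s0, cap = 10.0, 100.0                          # initial storage, capacity
    r_min, r_max = 0.0, 35.0                       # channel min/max release

    # Storage follows from mass balance: s_t = s0 + cum(inflow) - cum(r),
    # so the bounds 0 <= s_t <= cap become linear inequalities in r.
    L = np.tril(np.ones((T, T)))                   # cumulative-sum matrix
    cum_in = s0 + np.cumsum(inflow)

    c = -np.ones(T)                                # maximize sum(r) = minimize -sum(r)
    A_ub = np.vstack([L, -L])                      # L r <= cum_in        (s_t >= 0)
    b_ub = np.concatenate([cum_in, cap - cum_in])  # -L r <= cap - cum_in (s_t <= cap)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(r_min, r_max)] * T)
    print("releases:", res.x, " total:", -res.fun)

Reservoirs in parallel or in tandem add one such block of variables and balance constraints per reservoir, with the tandem case feeding each reservoir's release into the inflow of the one below.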
Numerical Simulation of the 9-10 June 1972 Black Hills Storm Using CSU RAMS
Strong easterly flow of low-level moist air over the eastern slopes of the Black Hills on 9-10 June 1972 generated a storm system that produced a flash flood, devastating the area. Based on observations from this storm event, and also from the similar Big Thompson storm of 1976, conceptual models have been developed to explain the unusually high precipitation efficiency. In this study, the Black Hills storm is simulated using the Colorado State University Regional Atmospheric Modeling System (RAMS). Simulations with homogeneous and inhomogeneous initializations and different grid structures are presented. The conceptual models of storm structure proposed by previous studies are examined in light of the present simulations. Both homogeneous and inhomogeneous initialization results capture the intense nature of the storm, but the inhomogeneous simulation produced a precipitation pattern closer to the observed one. The simulations point to stationary tilted updrafts, with precipitation falling out to the rear, as the preferred storm structure. Experiments with different grid structures point to the importance of placing the lateral boundaries far from the region of activity. Overall, simulation performance in capturing the observed behavior of the storm system was enhanced by the use of inhomogeneous initialization.
Bistable Gradient Networks II: Storage Capacity and Behaviour Near Saturation
We examine numerically the storage capacity and the behaviour near saturation of an attractor neural network consisting of bistable elements with an adjustable coupling strength, the Bistable Gradient Network (BGN). For strong coupling, we find evidence of a first-order "memory blackout" phase transition as in the Hopfield network. For weak coupling, on the other hand, there is no evidence of such a transition, and memorized patterns can be stable even at high levels of loading. The enhanced storage capacity comes, however, at the cost of imperfect retrieval of the patterns from corrupted versions.
Comment: 15 pages, 12 eps figures. Submitted to Phys. Rev. E. Sequel to cond-mat/020356
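A minimal sketch of pattern retrieval in a network of bistable units, in the spirit of the BGN; the specific double-well potential x^4/4 - x^2/2 and the Hebbian couplings used here are assumptions based on the abstract, and the sizes and coupling strength are arbitrary:

    # Gradient descent on H = sum(x^4/4 - x^2/2) - (gamma/2) x.J.x,
    # an assumed form of the bistable-gradient dynamics.
    import numpy as np

    rng = np.random.default_rng(0)
    N, P, gamma, dt = 200, 10, 0.5, 0.05    # units, patterns, coupling, step

    xi = rng.choice([-1.0, 1.0], size=(P, N))    # random binary patterns
    J = (xi.T @ xi) / N                          # Hebbian couplings
    np.fill_diagonal(J, 0.0)

    def relax(x, steps=2000):
        for _ in range(steps):
            x += dt * (x - x**3 + gamma * (J @ x))   # -grad H
        return x

    # Retrieve pattern 0 from a version with 15% of its signs flipped.
    x = relax(xi[0] * rng.choice([1.0, -1.0], size=N, p=[0.85, 0.15]))
    print("overlap:", np.mean(np.sign(x) * xi[0]))

Sweeping gamma and the loading P/N in such a toy model is one way to probe the strong-coupling blackout versus the weak-coupling imperfect-retrieval regime described above.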
On the validity of entropy production principles for linear electrical circuits
We discuss the validity of close-to-equilibrium entropy production principles in the context of linear electrical circuits. Both the minimum and the maximum entropy production principle are understood within dynamical fluctuation theory. The starting point is a set of Langevin equations obtained by combining Kirchhoff's laws with Johnson-Nyquist noise at each dissipative element in the circuit. The main observation is that the fluctuation functional for time averages, which can be read off from the path-space action, is given to first order around equilibrium by an entropy production rate. That allows us to understand, beyond the schemes of irreversible thermodynamics, (1) the validity of the least-dissipation, minimum entropy production, and maximum entropy production principles close to equilibrium; (2) the role of the observables' parity under time-reversal and, in particular, the origin of Landauer's counterexample (1975) from the fact that the fluctuating observable there is odd under time-reversal; and (3) the critical remark of Jaynes (1980) concerning the apparent inappropriateness of entropy production principles in temperature-inhomogeneous circuits.
Comment: 19 pages, 1 figure
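For the simplest circuit, a single RC loop, the Langevin equation referred to above reads R dQ/dt = -Q/C + sqrt(2 k_B T R) xi(t), with xi(t) white noise; a minimal simulation sketch (parameter values are arbitrary scaled units) checks it against the equilibrium prediction <Q^2> = k_B T C:

    # Euler-Maruyama integration of the Johnson-Nyquist Langevin
    # equation for an RC circuit; a hedged sketch, not the paper's code.
    import numpy as np

    rng = np.random.default_rng(1)
    R, C, kT = 1.0, 1.0, 1.0           # resistance, capacitance, k_B T
    dt, steps = 1e-3, 500_000

    noise = np.sqrt(2 * kT * R * dt) / R * rng.standard_normal(steps)
    q, qs = 0.0, np.empty(steps)
    for i in range(steps):
        q += -q * dt / (R * C) + noise[i]
        qs[i] = q

    burn = steps // 10                 # discard the initial transient
    print(f"<Q^2> = {qs[burn:].var():.3f}, expected k_B*T*C = {kT * C:.3f}")

The path-space action of this same process is what the fluctuation functional for time averages is read off from.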
Massively parallel computing on an organic molecular layer
Current computers operate at enormous speeds of ~10^13 bits/s, but their principle of sequential logic operation has remained unchanged since the 1950s. Though our brain is much slower on a per-neuron basis (~10^3 firings/s), it is capable of remarkable decision-making based on the collective operations of millions of neurons at a time in ever-evolving neural circuitry. Here we use molecular switches to build an assembly in which each molecule communicates, like a neuron, with many neighbors simultaneously. The assembly's ability to reconfigure itself spontaneously for a new problem allows us to realize conventional computing constructs like logic gates and Voronoi decompositions, as well as to reproduce two natural phenomena: heat diffusion and the mutation of normal cells to cancer cells. This is a shift from the current static computing paradigm of serial bit-processing to a regime in which a large number of bits are processed in parallel in dynamically changing hardware.
Comment: 25 pages, 6 figures
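A minimal sketch of one construct named above, a Voronoi decomposition computed by synchronous local updates on a grid, loosely analogous to many molecules updating from their neighbors at once; the grid, seeds, and periodic boundaries are illustration choices, not the paper's experiment:

    # Each unclaimed cell adopts the label of a claimed 4-neighbor,
    # all cells updating in parallel, until the grid is partitioned.
    import numpy as np

    rng = np.random.default_rng(2)
    H, W, n_seeds = 40, 60, 5
    label = np.zeros((H, W), dtype=int)          # 0 = unclaimed
    for k in range(1, n_seeds + 1):
        label[rng.integers(H), rng.integers(W)] = k

    changed = True
    while changed:                               # one wavefront layer per sweep
        new = label.copy()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted = np.roll(label, (dy, dx), axis=(0, 1))  # periodic edges
            take = (new == 0) & (shifted != 0)
            new[take] = shifted[take]
        changed = bool((new != label).any())
        label = new

    print(np.bincount(label.ravel())[1:])        # cells per Voronoi region

Each sweep grows every region by one layer, so a cell ends up with the label of its nearest seed in Manhattan distance, with ties broken by the neighbor scan order.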