
    Computer Assistance for “Discovering” Formulas in System Engineering and Operator Theory

    The objective of this paper is two-fold. First we present a methodology for using a combination of computer assistance and human intervention to discover highly algebraic theorems in operator, matrix, and linear systems engineering theory. Since the methodology allows limited human intervention, it is slightly less rigid than an algorithm. We call it a strategy. The second objective is to illustrate the methodology by deriving four theorems. The presentation of the methodology is carried out in three steps. The first step is introducing an abstraction of the methodology which we call an idealized strategy. This abstraction facilitates a high-level discussion of the ideas involved. Idealized strategies cannot be implemented on a computer. The second and third steps introduce approximations of these abstractions which we call a prestrategy and a strategy, respectively. A strategy is more general than a prestrategy and, in fact, every prestrategy is a strategy. The above-mentioned approximations are implemented on a computer. We stress that, since there is a computer implementation, the reader can use these techniques to attack their own algebra problems. Thus the paper might be of both practical and theoretical interest to analysts, engineers, and algebraists. Now we give the idea of a prestrategy. A prestrategy relies almost entirely on two commands which we call NCProcess1 and NCProcess2. These two commands are sufficiently powerful that, in many cases, when one applies them repeatedly to a complicated collection of equations, they transform the collection of equations into an equivalent but substantially simpler collection of equations. A loose description of a prestrategy applied to a list of equations is: (1) Declare which variables are known and which are unknown. At the beginning of a prestrategy, the order in which the equations are listed is not important, since NCProcess1 and NCProcess2 will reorder them so that the simplest ones appear first.
(2) Apply NCProcess1 to the equations; the output is a set of equations, usually some in fewer unknowns than before, carefully partitioned based upon which unknowns they contain. (3) The user must select “important equations,” especially any which solve for an unknown, say x. (When an equation is declared to be important or a variable is switched from being an unknown to being a known, then the way in which NCProcess1 and NCProcess2 reorder the equations is modified.) (4) Switch x to being known rather than unknown. Go to (2) above or stop. When this procedure stops, it hopefully gives the “canonical” necessary conditions for the original equations to have a solution. As a final step we run NCProcess2 which aggressively eliminates redundant equations and partitions the output equations in a way which facilitates proving that the necessary conditions are also sufficient. Many classical theorems in analysis can be viewed in terms of solving a collection of equations. We have found that this procedure actually discovers the classic theorem in a modest collection of classic cases involving factorization of engineering systems and matrix completion problems. One might regard the question of which classical theorems in analysis can be proven with a strategy as an analog of classical Euclidean geometry where a major question was what can be constructed with a compass and ruler. Here the goal is to determine which theorems in systems and operator theory could be discovered by repeatedly applying NCProcess1 and NCProcess2 (or their successors) and the (human) selection of equations which are important. The major practical challenge addressed here is finding operations which, when implemented in software, present the user with crucial algebraic information about his problem while not overwhelming him with too much redundant information. This paper consists of two parts. 
    A description of strategies, a high-level description of the algorithms, a description of the applications to operator, matrix, and linear system engineering theory, and a description of how one would use a strategy to “discover” four different theorems are presented in the first part of the paper. Thus, one who seeks a conventional viewpoint for this rather unconventional paper might think of it as providing a unified proof of four different theorems. The theorems were selected for their diverse proofs and because they are widely known (so that many readers should be familiar with at least one of them). The NCProcess commands use noncommutative Gröbner Basis algorithms which have emerged in the last decade, together with algorithms for removing redundant equations and a method for assisting a mathematician in writing a (noncommutative) polynomial as a composition of polynomials. The reader needs to know nothing about Gröbner Bases to understand the first part of this paper. Descriptions involving the theory of Gröbner Bases appear in the second part of the paper.

    Computer Assistance In Discovering Formulas And Theorems In System Engineering II

    [HSWcdc94] focused on procedures for simplifying complicated expressions automatically. [HScdc95] turned to the adventurous pursuit of developing a highly computer-assisted method for “discovering” certain types of formulas and theorems. It is often the case that some variables in the formulation of a problem are not the natural “coordinates” for the solution of the problem. Gröbner Basis algorithms, which lie at the core of our method, are very good at eliminating unknowns, but have no way of finding good changes of variables. This paper gives a way of incorporating changes of variables into our method. As an example, we “discover” the DGKF equations of H∞ control.
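The idea of feeding a change of variables into an elimination engine can be shown in a commutative toy setting: adjoin the defining equation of the proposed new variable, then eliminate the old coordinates. This sympy sketch is an assumption-laden stand-in (the paper works with noncommutative polynomials, and the coordinates here are invented for illustration):

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')

# A relation written in awkward coordinates: (x*y)**2 = 1.
eqs = [x**2 * y**2 - 1]

# Propose the change of variable z = x*y by adjoining its defining
# equation, then eliminate x and y with a lex ordering (x > y > z).
gb = groebner(eqs + [z - x*y], x, y, z, order='lex')

# The part of the basis free of x and y is the relation rewritten in the
# new coordinate: z**2 - 1.
rel = [g for g in gb.exprs if not g.has(x) and not g.has(y)]
```

The elimination theorem guarantees that the lex basis intersected with the polynomials in z alone generates the relation in the new coordinate; the human contribution, as in the paper, is choosing a good z.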

    Noncommutative Computer Algebra in the Control of Singularly Perturbed Dynamical Systems

    Most algebraic calculations which one sees in linear systems theory, for example in IEEE TAC, involve block matrices and so are highly noncommutative. Thus conventional commutative computer algebra packages, as in Mathematica and Maple, do not address them. Here we investigate the usefulness of noncommutative computer algebra in a particular area of control theory, singularly perturbed dynamical systems, where working with the noncommutative polynomials involved is especially tedious. Our conclusion is that it has considerable potential for helping practitioners with such computations. For example, the methods introduced here take the most standard textbook singular perturbation calculation, [KKO86], one step further than had been done previously. Commutative Groebner basis algorithms are powerful and make up the engines in symbolic algebra packages’ Solve commands. Noncommutative Groebner basis algorithms are more recent, but we shall see that they are useful in manipulating the messy sets of noncommutative polynomial equations which arise in singular perturbation calculations. We use the noncommutative algebra package NCAlgebra and the noncommutative Groebner basis package NCGB which runs under it.
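A minimal illustration (ours, not the paper's) of why the commutative Solve engines fall short here: scalar unknowns commute, so a commutative CAS may freely reorder factors, but block-matrix unknowns of the kind that fill linear systems theory do not.

```python
from sympy import symbols, MatrixSymbol

# Scalar unknowns commute: a commutative CAS collapses a*b - b*a to 0.
a, b = symbols('a b')
commutator_scalar = a*b - b*a          # simplifies to 0

# Matrix unknowns, standing in for the block matrices of systems theory,
# do not commute, so the same rewrite is invalid and noncommutative
# machinery (NCAlgebra/NCGB in the paper) is required.
A = MatrixSymbol('A', 2, 2)
B = MatrixSymbol('B', 2, 2)
commutator_matrix = A*B - B*A          # remains A*B - B*A
```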

    Multiparticle angular correlations: a probe for the sQGP at RHIC

    A novel decomposition technique is used to extract the centrality dependence of di-jet properties and yields from azimuthal correlation functions obtained in Au+Au collisions at \sqrt{s_{NN}} = 200 GeV. The width of the near-side jet shows very little dependence on centrality. In contrast, the away-side jet indicates substantial broadening as well as hints of a local minimum at \Delta\phi = \pi for central and mid-central events. The yield of jet pairs (per trigger particle) slowly increases with centrality for both the near- and away-side jets. These observed features are compatible with several recent theoretical predictions of possible modifications of di-jet fragmentation by a strongly interacting medium. Several new experimental approaches, including the study of flavor permutation and higher-order multi-particle correlations, that might help to distinguish between different theoretical scenarios are discussed.
    Comment: Proceedings of the MIT workshop on correlations and fluctuations

    Parton energy loss limits and shadowing in Drell-Yan dimuon production

    A precise measurement of the ratios of the Drell-Yan cross section per nucleon for an 800 GeV/c proton beam incident on Be, Fe and W targets is reported. The behavior of the Drell-Yan ratios at small target parton momentum fraction is well described by an existing fit to the shadowing observed in deep-inelastic scattering. The cross section ratios as a function of the incident parton momentum fraction set tight limits on the energy loss of quarks passing through a cold nucleus.

    Femtosecond photodissociation dynamics of 1,4-diiodobenzene by gas-phase X-ray scattering and photoelectron spectroscopy

    We present a multifaceted investigation into the initial photodissociation dynamics of 1,4-diiodobenzene (DIB) following absorption of 267 nm radiation. We combine ultrafast time-resolved photoelectron spectroscopy and X-ray scattering experiments performed at the Linac Coherent Light Source (LCLS) to study the initial electronic excitation and subsequent rotational alignment, and interpret the experiments in light of Complete Active Space Self-Consistent Field (CASSCF) calculations of the excited electronic landscape. The initially excited state is found to be a bound 1B1 surface, which undergoes ultrafast population transfer to a nearby state in 35 ± 10 fs. The internal conversion most likely leads to one or more singlet repulsive surfaces that initiate the dissociation. This initial study is an essential and prerequisite component of a comprehensive study of the complete photodissociation pathway(s) of DIB at 267 nm. Assignment of the initially excited electronic state as a bound state identifies the mechanism as predissociative, and measurement of its lifetime establishes the time between excitation and initiation of dissociation, which is crucial for direct comparison of photoelectron and scattering experiments.

    Measurement of Angular Distributions of Drell-Yan Dimuons in p + p Interactions at 800 GeV/c

    We report a measurement of the angular distributions of Drell-Yan dimuons produced using an 800 GeV/c proton beam on a hydrogen target. The polar and azimuthal angular distribution parameters have been extracted over the kinematic range 4.5 < m_{\mu\mu} < 15 GeV/c^2 (excluding the \Upsilon resonance region), 0 < p_T < 4 GeV/c, and 0 < x_F < 0.8. The p+p angular distributions are similar to those of p+d, and both data sets are compared with models which attribute the \cos 2\phi distribution either to the presence of the transverse-momentum-dependent Boer-Mulders structure function h_1^\perp or to QCD effects. The data indicate the presence of both mechanisms. The validity of the Lam-Tung relation in p+p Drell-Yan is also tested.
    Comment: 4 pages, 3 figures
