
    WavePacket: A Matlab package for numerical quantum dynamics. II: Open quantum systems, optimal control, and model reduction

    WavePacket is an open-source program package for numerical simulations in quantum dynamics. It can solve time-independent or time-dependent linear Schrödinger and Liouville-von Neumann equations in one or more dimensions. Coupled equations can also be treated, which allows, for example, the simulation of molecular quantum dynamics beyond the Born-Oppenheimer approximation. Optionally accounting for the interaction with external electric fields within the semi-classical dipole approximation, WavePacket can be used to simulate experiments involving tailored light pulses in photo-induced physics or chemistry. Being highly versatile and offering visualization of quantum dynamics 'on the fly', WavePacket is well suited for teaching or research projects in atomic, molecular and optical physics as well as in physical or theoretical chemistry. Building on the previous Part I, which dealt with closed quantum systems and discrete variable representations, the present Part II focuses on the dynamics of open quantum systems, with Lindblad operators modeling dissipation and dephasing. This part also describes the WavePacket function for optimal control of quantum dynamics, building on rapid monotonically convergent iteration methods. Furthermore, two different approaches to dimension reduction implemented in WavePacket are documented here. In the first, a balancing transformation based on the concepts of controllability and observability Gramians is used to identify states that are neither well controllable nor well observable; those states are either truncated or averaged out. In the second, the H2 error for a given reduced dimensionality is minimized by H2-optimal model reduction techniques, utilizing a bilinear iterative rational Krylov algorithm.
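
    A minimal sketch, in plain numpy rather than WavePacket's own API, of the Lindblad master equation that Part II uses to model dissipation and dephasing; the two-level Hamiltonian, the decay and dephasing rates, and the simple Euler propagation below are illustrative assumptions.

        import numpy as np

        sz = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli z
        sm = np.array([[0, 0], [1, 0]], dtype=complex)    # lowering operator |g><e|

        H = 0.5 * sz                                       # two-level Hamiltonian (hbar = 1)
        lindblads = [np.sqrt(0.05) * sm,                   # dissipation (population decay)
                     np.sqrt(0.02) * sz]                   # pure dephasing

        def lindblad_rhs(rho):
            # d(rho)/dt = -i[H, rho] + sum_k ( L rho L^+  -  {L^+ L, rho} / 2 )
            drho = -1j * (H @ rho - rho @ H)
            for L in lindblads:
                LdL = L.conj().T @ L
                drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
            return drho

        rho = np.array([[1, 0], [0, 0]], dtype=complex)    # start in the excited state
        dt = 0.01
        for _ in range(2000):                              # Euler propagation to t = 20
            rho = rho + dt * lindblad_rhs(rho)

        print("excited-state population:", rho[0, 0].real)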

    Implementation of generalized optimality criteria in a multidisciplinary environment

    A generalized optimality criterion method consisting of a dual problem solver combined with a compound scaling algorithm was implemented in the multidisciplinary design tool, ASTROS. This method enables, for the first time in a production design tool, the determination of a minimum weight design using thousands of independent structural design variables while simultaneously considering constraints on response quantities in several disciplines. Even for moderately large examples, the computational efficiency is improved significantly relative to the conventional approach.
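
    A minimal single-constraint sketch, not the ASTROS implementation, of an optimality-criterion resize rule with a bisection on the Lagrange multiplier standing in for the dual solver; the truss data are made up, and the compound scaling over several disciplines is not reproduced here.

        import numpy as np

        # Statically determinate truss: controlled displacement is sum(c_i / A_i),
        # weight is rho * sum(L_i * A_i). All numbers are illustrative assumptions.
        rho = 2.7e-6                                      # density (kg/mm^3)
        L = np.array([1000.0, 1400.0, 1000.0, 1400.0])    # member lengths (mm)
        c = np.array([2.0e3, 4.0e3, 2.0e3, 4.0e3])        # coefficients, c_i / A_i in mm
        d_max = 5.0                                       # displacement limit (mm)

        A = np.full_like(L, 100.0)                        # initial areas (mm^2)
        for _ in range(50):
            lo, hi = 1e-9, 1e9                            # bisection on the multiplier
            while hi - lo > 1e-6 * hi:
                lam = 0.5 * (lo + hi)
                # Resize rule: scale each area by the ratio of the weighted constraint
                # gradient to the weight gradient, damped with exponent 1/2.
                A_new = A * np.sqrt(lam * c / (rho * L * A**2))
                if np.sum(c / A_new) > d_max:             # still violated -> larger multiplier
                    lo = lam
                else:
                    hi = lam
            if np.max(np.abs(A_new - A) / A) < 1e-6:
                A = A_new
                break
            A = A_new

        print("areas:", A)
        print("weight:", rho * np.sum(L * A), "displacement:", np.sum(c / A))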

    POWERLIB: SAS/IML Software for Computing Power in Multivariate Linear Models

    The POWERLIB SAS/IML software provides convenient power calculations for a wide range of multivariate linear models with Gaussian errors. The software includes the Box, Geisser-Greenhouse, Huynh-Feldt, and uncorrected tests in the "univariate" approach to repeated measures (UNIREP), the Hotelling-Lawley Trace, Pillai-Bartlett Trace, and Wilks Lambda tests in the "multivariate" approach (MULTIREP), as well as a limited but useful range of mixed models. The familiar univariate linear model with Gaussian errors is an important special case. For estimated covariance, the software provides confidence limits for the resulting estimated power. All power and confidence limit values can be output to a SAS dataset, which can be used to easily produce plots and tables for manuscripts.
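
    A minimal sketch, in Python/scipy rather than SAS/IML, of the noncentral-F calculation that underlies power for a univariate linear model test; the sample size, degrees of freedom, and effect size are illustrative assumptions, and none of POWERLIB's UNIREP/MULTIREP corrections or confidence limits are reproduced.

        from scipy.stats import f, ncf

        n, p = 40, 3                  # sample size, number of predictors (incl. intercept)
        df1, df2 = p - 1, n - p       # hypothesis and error degrees of freedom
        alpha = 0.05
        effect_variance = 0.15        # hypothesized variance explained by the tested predictors
        error_variance = 1.0

        # One common parameterization of the noncentrality: n * (effect variance / error variance)
        nc = n * effect_variance / error_variance

        f_crit = f.ppf(1 - alpha, df1, df2)           # critical value under the null
        power = 1 - ncf.cdf(f_crit, df1, df2, nc)     # probability of exceeding it under the alternative
        print(f"power = {power:.3f}")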

    Square-rich fixed point polynomial evaluation on FPGAs

    Polynomial evaluation is important across a wide range of application domains, so significant work has been done on accelerating its computation. The conventional algorithm, referred to as Horner's rule, requires the fewest operations but can suffer from high latency because its computation is entirely serial. Parallel evaluation algorithms such as Estrin's method have shorter latency than Horner's rule, but achieve this at the expense of a large hardware overhead. This paper presents an efficient polynomial evaluation algorithm which reforms the evaluation process to include an increased number of squaring steps. By using a squarer design that is more efficient than a general multiplier, this can result in polynomial evaluation with a 57.9% latency reduction over Horner's rule and 14.6% over Estrin's method, while consuming less area than Horner's rule, when implemented on a Xilinx Virtex 6 FPGA. When applied to fixed-point function evaluation, where precision requirements limit the rounding of operands, it still achieves a 52.4% performance gain compared to Horner's rule with only a 4% area overhead when evaluating fifth-degree polynomials.
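
    A minimal software sketch, not the paper's FPGA design, contrasting the fully serial Horner recurrence with Estrin's tree-shaped scheme, whose repeated squaring of x is the kind of step the square-rich reformulation exploits in hardware; the coefficients and evaluation point are arbitrary.

        # Coefficients are ordered c[0] + c[1]*x + c[2]*x^2 + ...
        def horner(c, x):
            acc = 0.0
            for coeff in reversed(c):          # one multiply-add per coefficient, fully serial
                acc = acc * x + coeff
            return acc

        def estrin(c, x):
            c = list(c)
            while len(c) > 1:
                if len(c) % 2:                 # pad to an even length
                    c.append(0.0)
                # pair adjacent coefficients: (c0 + c1*x), (c2 + c3*x), ... can run in parallel
                c = [c[i] + c[i + 1] * x for i in range(0, len(c), 2)]
                x = x * x                      # squaring step reused at every level of the tree
            return c[0]

        coeffs = [1.0, -2.0, 0.5, 3.0, -1.0, 0.25]   # illustrative fifth-degree polynomial
        print(horner(coeffs, 1.3), estrin(coeffs, 1.3))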

    MSB-First Interval-Bounded Variable-Precision Real-Time Arithmetic Unit

    This paper presents a paradigm of real-time processing at the lowest level of computing systems: the arithmetic unit. An arithmetic unit based on this principle, providing addition, subtraction, multiplication and division operations, is described. The development of the computation model is based on the Soft Computing and Imprecise Computation paradigms, combined with the MSB-First and Interval Arithmetic techniques. These paradigms and techniques give the arithmetic unit design the ability to compute with a precision that is a function of the time available or the accuracy needed. Predictability of processing time and result accuracy is obtained by means of a processing granularity of k bits and by using look-up tables. We present an evaluation of the operations in terms of time delay and computation accuracy that shows significant performance improvements over conventional arithmetic unit architectures: the ability to produce intermediate results during execution, to guarantee computation accuracy even before the process finishes by providing two intermediate results that act as lower and upper bounds of the complete computation result, and to attain high computation accuracy from early in the execution process.
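
    A minimal sketch, written for illustration rather than taken from the paper's hardware, of the interval-bounded MSB-first idea applied to multiplication: after the k most significant bits of each operand have been consumed, a lower and an upper bound on the final product are available and tighten at every step; the 16-bit width and operand values are assumptions.

        def msb_first_bounds(a, b, width=16):
            for k in range(1, width + 1):
                mask = ((1 << k) - 1) << (width - k)        # keep only the top k bits
                a_lo, b_lo = a & mask, b & mask             # unknown low bits assumed 0
                unknown = (1 << (width - k)) - 1
                a_hi, b_hi = a_lo | unknown, b_lo | unknown # unknown low bits assumed 1
                yield k, a_lo * b_lo, a_hi * b_hi           # interval bounding the true product

        a, b = 0xBEEF, 0x1234                               # illustrative 16-bit operands
        for k, lo, hi in msb_first_bounds(a, b):
            print(f"after {k:2d} bits: {lo:10d} <= a*b <= {hi:10d}")
        print("exact:", a * b)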

    AutoBayes: A System for Generating Data Analysis Programs from Statistical Models

    Data analysis is an important scientific task that is required whenever information needs to be extracted from raw data. Statistical approaches to data analysis, which use methods from probability theory and numerical analysis, are well-founded but difficult to implement: the development of a statistical data analysis program for any given application is time-consuming and requires substantial knowledge and experience in several areas. In this paper, we describe AutoBayes, a program synthesis system for the generation of data analysis programs from statistical models. A statistical model specifies the properties for each problem variable (i.e., observation or parameter) and its dependencies in the form of a probability distribution. It is a fully declarative problem description, similar in spirit to a set of differential equations. From such a model, AutoBayes generates optimized and fully commented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Code is produced by a schema-guided deductive synthesis process. A schema consists of a code template and applicability constraints which are checked against the model during synthesis using theorem-proving technology. AutoBayes augments schema-guided synthesis with symbolic-algebraic computation and can thus derive closed-form solutions for many problems. It is well suited for tasks like estimating best-fitting model parameters for the given data. Here, we describe AutoBayes's system architecture, in particular the schema-guided synthesis kernel. Its capabilities are illustrated by a number of advanced textbook examples and benchmarks.
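
    A minimal hand-written sketch, not actual AutoBayes output or its specification language, of the kind of closed-form maximum-likelihood estimator the system can derive symbolically from a declarative model such as x_i ~ Normal(mu, sigma^2), i = 1..n.

        import numpy as np

        def fit_gaussian(x):
            n = x.size
            mu_hat = x.sum() / n                        # from d/d(mu) log-likelihood = 0
            sigma2_hat = ((x - mu_hat) ** 2).sum() / n  # from d/d(sigma^2) log-likelihood = 0
            return mu_hat, sigma2_hat

        x = np.random.default_rng(0).normal(1.5, 0.8, size=1000)  # illustrative data
        print(fit_gaussian(x))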

    The MFIBVP real-time multiplier

    This paper presents the architecture of the MFIBVP real-time multiplier. The MFIBVP technique is a combination of the MSB-First computation, Interval-Bounded Arithmetic, and Variable-Precision computation techniques. MFIBVP computation guarantees that the computation carried out yields high accuracy from early in the computation, provides self error estimation, and is time-optimal. This paper shows that the MFIBVP real-time multiplier unit can achieve an intermediate-result accuracy of more than 99% from the second phase of its process onward.