System balance analysis for vector computers
The availability of vector processors capable of sustaining computing rates of 10^8 arithmetic results per second raises the question of whether peripheral storage devices representing current technology can keep such processors supplied with data. By examining the solution of a large banded linear system on these computers, it was found that, even under ideal conditions, the processors will frequently be left waiting for problem data.
Recommended from our members
Computer-aided programming for multiprocessing systems
As both the number of processors and the complexity of the problems to be solved increase, programming multiprocessing systems becomes more difficult and error-prone. This report discusses parallel models of computation and tools for computer-aided programming (CAP). Program development tools are necessary because programmers cannot develop complex parallel programs efficiently unaided. In particular, a CAP tool named Hypertool is described here. It performs scheduling and inserts communication primitives automatically, eliminating many errors. It also generates performance estimates and other program quality measures to help programmers improve their algorithms and programs. Experiments have shown that computer-aided programming can achieve performance improvements of up to 300%.
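To illustrate the kind of automatic scheduling such a CAP tool performs, here is a minimal greedy list scheduler for a task graph. This is a generic sketch, not Hypertool's actual algorithm; the task names, durations, and dependency structure are invented for the example.

```python
# Greedy list scheduling sketch (illustrative, not Hypertool's method):
# repeatedly pick a ready task and place it on the processor that
# becomes free earliest, respecting dependency finish times.

def list_schedule(durations, deps, n_procs):
    """deps[t] = set of tasks that must finish before t starts.
    Returns (task -> (processor, start time), makespan)."""
    finish, placement = {}, {}
    proc_free = [0.0] * n_procs
    pending = set(durations)
    while pending:
        # a task is ready once all its dependencies are scheduled
        ready = [t for t in pending if deps.get(t, set()) <= set(finish)]
        t = min(ready)                                   # deterministic pick
        p = min(range(n_procs), key=lambda i: proc_free[i])
        start = max([proc_free[p]] + [finish[d] for d in deps.get(t, set())])
        finish[t] = start + durations[t]
        proc_free[p] = finish[t]
        placement[t] = (p, start)
        pending.remove(t)
    return placement, max(finish.values())

durations = {"a": 2, "b": 3, "c": 1, "d": 2}
deps = {"c": {"a"}, "d": {"a", "b"}}
placement, makespan = list_schedule(durations, deps, n_procs=2)
```

A real tool would additionally insert the communication primitives between tasks placed on different processors, which is where the hand-coding errors Hypertool eliminates tend to arise.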
From discretization to regularization of composite discontinuous functions
Discontinuities between distinct regions, described by different equation sets, cause difficulties for PDE/ODE solvers. We present a new algorithm that eliminates integrator discontinuities by regularizing them. First, the algorithm determines the optimum switch point between two functions spanning adjacent or overlapping domains, by searching for a "jump point" that minimizes the discontinuity between them. The discontinuity is then resolved using an interpolating polynomial that joins the two discontinuous functions.
This approach eliminates the need for conventional integrators either to discretize and then link discontinuities through interpolating polynomials based on state variables, or to reinitialize state variables when discontinuities are detected in an ODE/DAE system. In contrast to conventional approaches that handle discontinuities at the state-variable level only, the new approach tackles discontinuity at both the state-variable and the constitutive-equation levels. It thus eliminates the errors associated with interpolating polynomials generated at the state-variable level for discontinuities occurring in the constitutive equations.
Computer memory requirements for this approach increase exponentially with the dimension of the discontinuous function, so the method is limited for functions of relatively high dimension. Since memory capacity continues to increase while its price falls, however, this is not expected to be a major limitation.
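The two steps described above, locating the jump point and bridging the two functions smoothly, can be sketched in a few lines. This is an illustrative reconstruction under assumed details (grid search for the jump point, a cubic smoothstep blend as the joining polynomial), not the authors' implementation.

```python
# Sketch of the regularization idea: (1) search for the "jump point"
# minimizing the mismatch between two region functions, (2) bridge them
# over a narrow window with a cubic blend so the composite function an
# ODE integrator sees is C^1-continuous. Names and the smoothstep choice
# are illustrative assumptions.

def find_switch_point(f1, f2, lo, hi, n=1000):
    """Grid-search the x minimizing the discontinuity |f1(x) - f2(x)|."""
    best_x, best_gap = lo, abs(f1(lo) - f2(lo))
    for i in range(1, n + 1):
        x = lo + (hi - lo) * i / n
        gap = abs(f1(x) - f2(x))
        if gap < best_gap:
            best_x, best_gap = x, gap
    return best_x

def regularize(f1, f2, xs, width):
    """Composite function: f1 left of xs, f2 right, cubic blend between."""
    a, b = xs - width / 2, xs + width / 2
    def smooth(x):
        if x <= a:
            return f1(x)
        if x >= b:
            return f2(x)
        t = (x - a) / (b - a)       # normalized position in the bridge
        w = t * t * (3 - 2 * t)     # smoothstep weight, C^1 at both ends
        return (1 - w) * f1(x) + w * f2(x)
    return smooth

# Example: two constitutive relations with a jump at their interface.
f1 = lambda x: 1.0 + 0.1 * x       # region A
f2 = lambda x: 2.0 - 0.1 * x       # region B
xs = find_switch_point(f1, f2, 0.0, 10.0)
g = regularize(f1, f2, xs, width=0.5)
```

The composite `g` agrees with each region's function away from the interface and transitions smoothly across it, which is what lets the integrator step through without event detection or reinitialization.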
Type-II Quantum Algorithms
We review and analyze the hybrid quantum-classical NMR computing methodology referred to as Type-II quantum computing. We show that all such algorithms considered so far within this paradigm are equivalent to some classical lattice-Boltzmann scheme. We derive a necessary and sufficient constraint on the unitary operator representing the quantum-mechanical part of the computation which ensures that the model reproduces the Boltzmann approximation of a lattice-gas model satisfying semi-detailed balance. Models which do not satisfy this constraint represent new lattice-Boltzmann schemes which cannot be formulated as the average over some underlying lattice gas. We close the paper with some discussion of the strengths, weaknesses and possible future directions of Type-II quantum computing.
Comment: To appear in Physica
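Since the abstract's central claim is equivalence to classical lattice-Boltzmann schemes, it may help to see the structure of such a scheme: a local collision followed by streaming to neighboring sites. The minimal D1Q2 diffusion model below illustrates that classical collide-and-stream structure only; it is not the quantum collision operator the paper analyzes.

```python
# Minimal classical lattice-Boltzmann (D1Q2 diffusion) sketch: each site
# carries a right-moving and a left-moving population. One time step is
# a local collision (relax toward the equilibrium rho/2) followed by
# streaming to the neighboring sites. Parameters are illustrative.

N = 16
f_plus = [0.0] * N    # right-moving populations
f_minus = [0.0] * N   # left-moving populations
f_plus[N // 2] = f_minus[N // 2] = 0.5   # unit-mass point source

def step(fp, fm, omega=1.0):
    # Collision: relax each population toward the local equilibrium.
    rho = [p + m for p, m in zip(fp, fm)]
    fp = [p + omega * (r / 2 - p) for p, r in zip(fp, rho)]
    fm = [m + omega * (r / 2 - m) for m, r in zip(fm, rho)]
    # Streaming: shift populations one site (periodic boundaries).
    fp = [fp[(i - 1) % N] for i in range(N)]
    fm = [fm[(i + 1) % N] for i in range(N)]
    return fp, fm

for _ in range(20):
    f_plus, f_minus = step(f_plus, f_minus)
rho = [p + m for p, m in zip(f_plus, f_minus)]   # total mass is conserved
```

In a Type-II machine the collision at each node is carried out by a small quantum computation whose result is measured, while the streaming step remains classical; the paper's constraint governs when this reduces to the Boltzmann average of a lattice gas.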
Ecodesign of Batch Processes: Optimal Design Strategies for Economic and Ecological Bioprocesses
This work deals with the multicriteria cost-environment design of multiproduct batch plants, where the design variables are the equipment item sizes as well as the operating conditions. The case study is a multiproduct batch plant for the production of four recombinant proteins. Given the important combinatorial aspect of the problem, the approach used consists of coupling a stochastic algorithm, namely a Genetic Algorithm (GA), with a Discrete Event Simulator (DES). To take into account the conflicting situations that may be encountered at the earliest stage of batch plant design, i.e. compromise situations between cost and environmental considerations, a Multicriteria Genetic Algorithm (MUGA) was developed with a Pareto optimal ranking method. The results show how the methodology can be used to find a range of trade-off solutions for optimizing batch plant design.
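The Pareto optimal ranking at the heart of such a multicriteria GA can be sketched compactly. The ranking below (peel off successive non-dominated fronts) is one standard formulation, shown here as an illustration; the design tuples are invented, and the paper's GA/DES coupling is not reproduced.

```python
# Pareto ranking sketch for two objectives to be minimized, e.g.
# (cost, environmental impact). Rank 0 is the non-dominated front;
# successive fronts are peeled off after removing earlier ones.

def dominates(a, b):
    """a dominates b: no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_ranks(points):
    remaining = set(range(len(points)))
    ranks, rank = {}, 0
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return [ranks[i] for i in range(len(points))]

# (cost, environmental impact) for four hypothetical plant designs:
designs = [(10, 5), (8, 7), (12, 4), (11, 6)]
ranks = pareto_ranks(designs)   # (11, 6) is dominated by (10, 5)
```

A GA using this ranking as its fitness drives the population toward the trade-off front rather than toward a single weighted optimum, which is what yields the range of compromise solutions the abstract reports.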
Optimal greenhouse cultivation control: survey and perspectives
Abstract: A survey is presented of the literature on greenhouse climate control, positioning the various solutions and paradigms in the framework of optimal control. A separation of timescales allows the economic optimal control problem of greenhouse cultivation to be split into an off-line problem at the tactical level and an on-line problem at the operational level. This paradigm is used to classify the literature into three categories: focus on operational control, focus on the tactical level, and truly integrated control. Integrated optimal control guarantees the best economic result and provides a systematic way to design control systems for the innovative greenhouses of the future. Research issues and perspectives are listed as well.
Group implicit concurrent algorithms in nonlinear structural dynamics
During the 1970s and 1980s, considerable effort was devoted to developing efficient and reliable time-stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low-frequency content of the response without necessitating the resolution of the high-frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithm-development area in recent years has been the advent of parallel computers with multiprocessing capabilities. This work is therefore mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time-stepping procedures which lend themselves to an efficient implementation on concurrent machines. Some features of the new computer architectures are summarized, and a brief survey of current efforts in the area is presented. A new class of concurrent procedures, Group Implicit (GI) algorithms, is introduced and analyzed. Numerical simulation shows that GI algorithms hold considerable promise for application on coarse-grain as well as medium-grain parallel computers.
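The abstract's point that stiffness rules out explicit integration can be illustrated on a single stiff equation. This is a textbook illustration of unconditional stability, not the Group Implicit algorithm itself; the coefficient and step size are chosen for the demonstration.

```python
# Stiffness illustration: y' = -k*y with k = 1000 has the explicit
# (forward Euler) stability limit h < 2/k. Stepping with h = 0.1, far
# beyond that limit, forward Euler diverges while the unconditionally
# stable backward Euler decays toward the true solution.

k, h, steps = 1000.0, 0.1, 50      # h is 50x the explicit stability limit
y_exp = y_imp = 1.0
for _ in range(steps):
    y_exp = y_exp + h * (-k * y_exp)   # forward Euler: |1 - h*k| = 99 > 1
    y_imp = y_imp / (1.0 + h * k)      # backward Euler: y_next = y/(1 + h*k)

# y_exp has blown up; y_imp has decayed toward zero like the exact solution.
```

Unconditionally stable implicit steps like the second update are what GI algorithms aim to retain while still exposing concurrency, by letting groups of subdomains advance implicitly in parallel.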