High Performance Sparse Multivariate Polynomials: Fundamental Data Structures and Algorithms
Polynomials may be represented sparsely to conserve memory and to provide a succinct and natural representation. Moreover, polynomials which are themselves sparse – having very few non-zero terms – waste memory and computation time if represented, and operated on, densely. This waste is exacerbated as the number of variables increases. We provide practical implementations of sparse multivariate data structures focused on data locality and cache complexity. Using these sparse data structures, we develop high-performance algorithms and implementations of fundamental polynomial operations, such as arithmetic (addition, subtraction, multiplication, and division) and interpolation. We revisit the sparse arithmetic scheme introduced by Johnson in 1974, adapting and optimizing its algorithms for modern computer architectures, with our implementations over the integers and rational numbers vastly outperforming the current widespread implementations. We develop a new algorithm for sparse pseudo-division based on the sparse polynomial division algorithm, with very encouraging results. Polynomial interpolation is explored through univariate, dense multivariate, and sparse multivariate methods. Together, arithmetic and interpolation form a solid high-performance foundation on which many higher-level and more interesting algorithms can be built.
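The flavor of the Johnson-style scheme can be sketched in a few lines (a simplified reconstruction for illustration, not the thesis's optimized implementation): product terms are generated in decreasing monomial order via a binary heap, so like terms can be combined as they are produced.

```python
import heapq

def heap_mul(f, g):
    """Multiply sparse polynomials given as lists of
    (coefficient, exponent-tuple) pairs in decreasing lexicographic
    order. Simplified Johnson-style heap multiplication: the heap
    always holds the largest not-yet-emitted product term for each
    term of f, so output appears in sorted order and like terms are
    combined on the fly."""
    if not f or not g:
        return []
    # One heap entry per term of f, starting at g's leading term.
    # Exponents are negated so Python's min-heap yields the largest first.
    heap = [(tuple(-a - b for a, b in zip(f[i][1], g[0][1])), i, 0)
            for i in range(len(f))]
    heapq.heapify(heap)
    result = []
    while heap:
        neg_exp, i, j = heapq.heappop(heap)
        exp = tuple(-e for e in neg_exp)
        coeff = f[i][0] * g[j][0]
        if result and result[-1][1] == exp:
            summed = result[-1][0] + coeff
            if summed == 0:
                result.pop()           # like terms cancelled
            else:
                result[-1] = (summed, exp)
        else:
            result.append((coeff, exp))
        if j + 1 < len(g):             # advance this row of the product
            heapq.heappush(
                heap,
                (tuple(-a - b for a, b in zip(f[i][1], g[j + 1][1])), i, j + 1))
    return result

# (x + y) * (x - y) = x^2 - y^2, exponent tuples over (x, y)
f = [(1, (1, 0)), (1, (0, 1))]
g = [(1, (1, 0)), (-1, (0, 1))]
product = heap_mul(f, g)
```

The heap keeps memory proportional to the number of terms of f rather than to the full product, which is one reason this scheme behaves well in the sparse setting.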
Decomposition Methods for Nonlinear Optimization and Data Mining
We focus on two central themes in this dissertation. The first one is on
decomposing polytopes and polynomials in ways that allow us to perform
nonlinear optimization. We start off by explaining important results on
decomposing a polytope into special polyhedra. We use these decompositions and
develop methods for computing a special class of integrals exactly. Namely, we
are interested in computing the exact value of integrals of polynomial
functions over convex polyhedra. We present prior work and new extensions of
the integration algorithms. Every integration method we present requires that
the polynomial has a special form. We explore two special polynomial
decomposition algorithms that are useful for integrating polynomial functions.
Both polynomial decompositions have strengths and weaknesses, and we experiment
with how to practically use them.
After developing practical algorithms and efficient software tools for
integrating a polynomial over a polytope, we focus on the problem of maximizing
a polynomial function over the continuous domain of a polytope. This
maximization problem is NP-hard, but we develop approximation methods that run
in polynomial time when the dimension is fixed. Moreover, our algorithm for
approximating the maximum of a polynomial over a polytope is related to
integrating the polynomial over the polytope. We show how the integration
methods can be used for optimization.
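The mechanism linking the two can be made concrete: for a non-negative continuous function f on a compact domain, the L^k norms satisfy (∫ f^k)^(1/k) → max f as k → ∞, so a fast exact integrator yields maximum approximations. A minimal one-dimensional sketch of this idea (my illustration; the actual algorithms operate over polytopes in fixed dimension and use exact integration):

```python
def lk_max_estimate(f, a, b, k, n=20000):
    """Estimate the maximum of a non-negative function f on [a, b]
    using the L^k-norm relation (integral of f^k)^(1/k) -> max f
    as k -> infinity. The integral is approximated with a midpoint
    rule; illustrative only."""
    h = (b - a) / n
    integral = sum(f(a + (i + 0.5) * h) ** k for i in range(n)) * h
    return integral ** (1.0 / k)

# The true maximum of 4x(1-x) on [0, 1] is 1, attained at x = 1/2;
# the k = 100 estimate is already within a few percent.
estimate = lk_max_estimate(lambda x: 4 * x * (1 - x), 0.0, 1.0, 100)
```

Raising k tightens the lower estimate toward the true maximum, which is why the quality of the approximation is controlled by how large a power of the polynomial one can afford to integrate.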
The second central topic in this dissertation is on problems in data science.
We first consider a heuristic for mixed-integer linear optimization. We show
how many practical mixed-integer linear programs have a special substructure containing
set partition constraints. We then describe a data structure well suited to finding
feasible zero-one integer solutions to systems of set partition constraints.
Finally, we end with an applied project using data science methods in medical
research.

Comment: PhD thesis of Brandon Dutr
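Finding a feasible zero-one solution to set partition constraints is an exact-cover problem: pick columns so every row is covered exactly once. The abstract does not specify its data structure, so the following is only a toy backtracking stand-in (production approaches use far cleverer structures, e.g. Knuth's dancing links):

```python
def set_partition_feasible(n_items, columns):
    """Backtracking search for a 0-1 feasible point of a set
    partitioning system: choose a subset of columns (each a
    frozenset of item indices) so that every item in
    range(n_items) is covered exactly once. Returns the chosen
    column indices, or None if the system is infeasible."""
    def solve(uncovered, chosen):
        if not uncovered:
            return chosen
        item = min(uncovered)          # branch on the lowest uncovered item
        for idx, col in enumerate(columns):
            # col must cover the item without overlapping covered items
            if item in col and col <= uncovered:
                res = solve(uncovered - col, chosen + [idx])
                if res is not None:
                    return res
        return None                    # dead end: backtrack
    return solve(frozenset(range(n_items)), [])

cols = [frozenset(s) for s in ({0, 1}, {2, 3}, {0, 2}, {1, 3}, {0, 3})]
solution = set_partition_feasible(4, cols)
```

Branching on the least uncovered item keeps the search tree small when columns overlap heavily, which is the typical shape of set partition substructures in practical models.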
Model based fault detection for two-dimensional systems
Fault detection and isolation (FDI) are essential in ensuring safe and reliable operation of industrial
systems. Extensive research has been carried out on FDI for one-dimensional (1-D)
systems, in which variables vary only with time. Existing FDI strategies mainly focus
on 1-D systems and can generally be classified as model-based and process-history-based
methods. In many industrial systems, the state variables change with space and time (e.g., sheet
methods. In many industrial systems, the state variables change with space and time (e.g., sheet
forming, fixed-bed reactors, and furnaces). These systems are termed distributed parameter
systems (DPS) or two-dimensional (2-D) systems. 2-D systems have commonly been represented
by the Roesser model and the Fornasini–Marchesini (F-M) model. Fault detection and isolation for 2-D systems
poses a great challenge in both theoretical development and application, and only limited
research results are available.
In this thesis, model based fault detection strategies for 2-D systems have been investigated
based on the F-M and the Roesser models. A dead-beat observer-based fault detection scheme
was already available for the F-M model. In this work, an observer-based fault detection strategy is investigated
for systems modelled by the Roesser model. Using the 2-D polynomial matrix technique,
a dead-beat observer is developed and the state estimate from the observer is then input to a
residual generator to monitor the occurrence of faults. An enhanced realization technique is incorporated
to achieve efficient fault detection with reduced computation. Simulation results indicate
that the proposed method is effective in detecting faults for systems without disturbances as well
as those affected by unknown disturbances.

The dead-beat observer based fault detection has been shown to be effective for 2-D systems
but strict conditions are required in order for an observer and a residual generator to exist. These
strict conditions may not be satisfied for some systems. The effect of process noise is also not
considered in the observer-based fault detection approaches for 2-D systems. To overcome these
disadvantages, 2-D Kalman filter based fault detection algorithms are proposed in this thesis.

A recursive 2-D Kalman filter is applied to obtain a state estimate minimizing the estimation
error variances. Based on the state estimate from the Kalman filter, a residual is generated
reflecting fault information. A model is formulated relating the residual to faults
over a moving evaluation window. Simulations are performed on two F-M models, and the results
indicate that faults can be detected effectively and efficiently using the Kalman filter based fault
detection.
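The residual-generation idea is easiest to see in a scalar analogue. The following is a hedged sketch only: the thesis works with genuinely two-dimensional (Roesser / F-M) recursions, whereas this toy runs a 1-D Kalman filter and flags the first innovation that exceeds a fixed threshold. All model parameters here are illustrative.

```python
import math

def detect_fault(measurements, a=0.9, c=1.0, q=0.01, r=0.04, threshold=1.0):
    """Scalar stand-in for Kalman-filter-based residual generation:
    filter the model x_{t+1} = a*x_t + w, y_t = c*x_t + v, and
    return the first time step at which the innovation (residual)
    exceeds the threshold, or None if no fault is flagged."""
    x_est, p = 0.0, 1.0
    for t, y in enumerate(measurements):
        # predict
        x_pred = a * x_est
        p_pred = a * a * p + q
        # residual (innovation): measurement minus predicted output
        res = y - c * x_pred
        if abs(res) > threshold:
            return t                   # fault alarm at this step
        # update
        k_gain = p_pred * c / (c * c * p_pred + r)
        x_est = x_pred + k_gain * res
        p = (1.0 - k_gain * c) * p_pred
    return None

# Small deterministic signal with an additive bias fault from t = 50 on.
measurements = [0.05 * math.sin(t) + (2.0 if t >= 50 else 0.0)
                for t in range(100)]
alarm_at = detect_fault(measurements)
```

Because the filter minimizes the estimation error variance, the fault-free residual stays small, and an additive fault shows up in the innovation essentially immediately.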
In the observer based and Kalman filter based fault detection approaches, the residual signals
are used to determine whether a fault occurs. For systems with complicated fault information
and/or noise, it is necessary to evaluate the residual signals using statistical techniques. Fault
detection of 2-D systems is proposed with the residuals evaluated using dynamic principal component
analysis (DPCA). Based on historical data, the reference residuals are first generated using
either the observer- or the Kalman filter-based approach. From the time-lagged residual
data matrices for the reference data, the principal components are calculated and the threshold
value obtained. In online applications, the T² statistic of the residual signals is compared with
the threshold value to determine fault occurrence. Simulation results show that applying DPCA
to evaluation of 2-D residuals is effective.
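The evaluation step can be sketched with a toy monitor. This is a deliberately simplified stand-in: it forms 2-lagged residual vectors and computes Hotelling's T² against an empirical threshold from fault-free reference residuals, whereas DPCA proper would first project the lagged data onto its principal components. All signal parameters are illustrative.

```python
import math

def t2_monitor(reference, online, quantile=0.99):
    """Flag online time steps whose 2-lagged residual vector has a
    Hotelling T^2 value above an empirical quantile of the
    fault-free reference T^2 values. Toy version of residual
    evaluation; DPCA would add a principal-component projection."""
    def lagged(r):
        return [(r[t], r[t - 1]) for t in range(1, len(r))]

    ref = lagged(reference)
    n = len(ref)
    m0 = sum(v[0] for v in ref) / n
    m1 = sum(v[1] for v in ref) / n
    s00 = sum((v[0] - m0) ** 2 for v in ref) / (n - 1)
    s11 = sum((v[1] - m1) ** 2 for v in ref) / (n - 1)
    s01 = sum((v[0] - m0) * (v[1] - m1) for v in ref) / (n - 1)
    det = s00 * s11 - s01 * s01

    def t2(v):
        d0, d1 = v[0] - m0, v[1] - m1
        # Mahalanobis form with the 2x2 covariance inverted analytically
        return (s11 * d0 * d0 - 2 * s01 * d0 * d1 + s00 * d1 * d1) / det

    threshold = sorted(t2(v) for v in ref)[int(quantile * (n - 1))]
    return [t for t, v in enumerate(lagged(online), start=1) if t2(v) > threshold]

# Fault-free reference residuals, and online residuals with a bias fault
# injected from t = 50 onwards.
ref_r = [0.1 * math.sin(0.7 * t) for t in range(200)]
on_r = [0.1 * math.sin(0.7 * t) + (1.0 if t >= 50 else 0.0) for t in range(100)]
alarms = t2_monitor(ref_r, on_r, quantile=1.0)
```

Setting the threshold from reference data rather than from a fixed bound is what lets the statistic adapt to the correlation structure of the residuals, which is the point of evaluating them with (D)PCA-style methods.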
Proceedings of the 1968 Summer Institute on Symbolic Mathematical Computation
Investigating symbolic mathematical computation using the PL/I FORMAC batch system and the Scope FORMAC interactive system.
Structure and Interpretation of Computer Programs
Structure and Interpretation of Computer Programs has had a dramatic impact on computer science curricula over the past decade. This long-awaited revision contains changes throughout the text. There are new implementations of most of the major programming systems in the book, including the interpreters and compilers, and the authors have incorporated many small changes that reflect their experience teaching the course at MIT since the first edition was published. A new theme has been introduced that emphasizes the central role played by different approaches to dealing with time in computational models: objects with state, concurrent programming, functional programming and lazy evaluation, and nondeterministic programming. There are new example sections on higher-order procedures in graphics and on applications of stream processing in numerical programming, and many new exercises. In addition, all the programs have been reworked to run in any Scheme implementation that adheres to the IEEE standard.
Accelerated Financial Applications through Specialized Hardware, FPGA
This project will investigate Field Programmable Gate Array (FPGA) technology in financial applications. FPGA implementation in high-performance computing is still in its infancy. Certain companies, such as XtremeData Inc., have advertised speed improvements of 50 to 1000 times for DNA sequencing using FPGAs, while using an FPGA as a coprocessor to handle specific tasks provides two to three times more processing power. FPGA technology increases performance by parallelizing calculations. This project will specifically address speed and accuracy improvements of both fundamental and transcendental functions when implemented using FPGA technology. The results of this project will lead to a series of recommendations for effective utilization of FPGA technology in financial applications.
A Distributed Security Architecture for Large Scale Systems
This thesis describes the research leading from the conception, through development, to the practical
implementation of a comprehensive security architecture for use within, and as a value-added enhancement
to, the ISO Open Systems Interconnection (OSI) model.
The Comprehensive Security System (CSS) is arranged basically as an Application Layer service but can
allow any of the ISO recommended security facilities to be provided at any layer of the model. It is
suitable as an 'add-on' service to existing arrangements or can be fully integrated into new applications.
For large-scale, distributed processing operations, a network of security management centres (SMCs) is
suggested, which can help ensure that system misuse is minimised and that flexible operation is provided
in an efficient manner.
The background to the OSI standards is covered in detail, followed by an introduction to security in open
systems. A survey of existing techniques in formal analysis and verification is then presented. The
architecture of the CSS is described in terms of a conceptual model using agents and protocols, followed
by an extension of the CSS concept to a large scale network controlled by SMCs.
A new approach to formal security analysis is described which is based on two main methodologies.
Firstly, every function within the system is built from layers of provably secure sequences of finite state
machines, using a recursive function to monitor and constrain the system to the desired state at all times.
Secondly, the correctness of the protocols generated by the sequences to exchange security information
and control data between agents in a distributed environment is analysed in terms of a modified temporal
Hoare logic. This is based on ideas concerning the validity of beliefs about the global state of a system
as a result of actions performed by entities within the system, including the notion of timeliness.
The two fundamental problems in number theory upon which the assumptions about the security of the
finite state machine model rest are described, together with a comprehensive survey of the very latest
progress in this area. Having assumed that the two problems will remain computationally intractable in
the foreseeable future, the method is then applied to the formal analysis of some of the components of the
Comprehensive Security System.
A practical implementation of the CSS has been achieved as a demonstration system for a network of IBM
Personal Computers connected via an Ethernet LAN, which fully meets the aims and objectives set out
in Chapter 1. This implementation is described, and finally some comments are made on the possible
future of research into security aspects of distributed systems.

IBM (United Kingdom) Laboratories, Hursley Park, Winchester, UK