Research on fully distributed data processing systems
Issued as Quarterly progress reports, nos. 1-11, and Project report, Project no. G-36-64
Revisiting Actor Programming in C++
The actor model of computation has gained significant popularity over the
last decade. Its high level of abstraction makes it appealing for concurrent
applications in parallel and distributed systems. However, designing a
real-world actor framework that combines full scalability, strong reliability,
and high resource efficiency requires many conceptual and algorithmic additions
to the original model.
In this paper, we report on designing and building CAF, the "C++ Actor
Framework". CAF targets at providing a concurrent and distributed native
environment for scaling up to very large, high-performance applications, and
equally well down to small constrained systems. We present the key
specifications and design concepts---in particular a message-transparent
architecture, type-safe message interfaces, and pattern matching
facilities---that make native actors a viable approach for many robust,
elastic, and highly distributed developments. We demonstrate the feasibility of
CAF in three scenarios: first for elastic, upscaling environments, second for
including heterogeneous hardware like GPGPUs, and third for distributed runtime
systems. Extensive performance evaluations indicate ideal runtime behaviour for
up to 64 cores at a very low memory footprint, or in the presence of GPUs. In
these tests, CAF consistently outperforms the competing actor environments
Erlang, Charm++, SalsaLite, Scala, ActorFoundry, and even OpenMPI.
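
To make the abstraction concrete, here is a minimal toy actor in plain C++: a mailbox guarded by a mutex, drained by a dedicated worker thread that handles one message at a time. It is a sketch of the model the paper builds on, with names of our own choosing (Actor, send); it is not CAF's implementation or API.

#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class Actor {
public:
  using Message = std::function<void()>;

  Actor() : worker_([this] { run(); }) {}

  ~Actor() {
    send(Message{});  // empty function object acts as the shutdown sentinel
    worker_.join();
  }

  // Enqueue a message; the actor's own thread processes it asynchronously.
  void send(Message msg) {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      mailbox_.push(std::move(msg));
    }
    ready_.notify_one();
  }

private:
  void run() {
    for (;;) {
      std::unique_lock<std::mutex> lock(mutex_);
      ready_.wait(lock, [this] { return !mailbox_.empty(); });
      Message msg = std::move(mailbox_.front());
      mailbox_.pop();
      lock.unlock();
      if (!msg) return;  // shutdown sentinel reached
      msg();             // exactly one message is handled at a time
    }
  }

  std::queue<Message> mailbox_;
  std::mutex mutex_;
  std::condition_variable ready_;
  std::thread worker_;  // declared last so the mailbox exists before it runs
};

int main() {
  Actor adder;
  int sum = 0;
  adder.send([&sum] { sum += 1; });
  adder.send([&sum] { sum += 2; });
  adder.send([&sum] { std::cout << "sum = " << sum << '\n'; });
}  // pending messages run before the FIFO mailbox reaches the sentinel

Sequential message handling is what frees actor code from explicit locks: state owned by an actor is only ever touched from its own worker thread.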
Cogitator: a parallel, fuzzy, database-driven expert system
The quest to build anthropomorphic machines has led researchers to focus on knowledge and the manipulation thereof. Recently, the expert system was proposed as a solution, working well in small, well-understood domains. However, these initial attempts highlighted the tedious process associated with building systems to display intelligence, the most notable difficulty being the Knowledge Acquisition Bottleneck. Attempts to circumvent this problem have led researchers to propose the use of machine learning databases as a source of knowledge. Attempts to utilise databases as sources of knowledge have led to the development of Database-Driven Expert Systems. Furthermore, it has been ascertained that a requisite for intelligent systems is powerful computation. In response to these problems and proposals, a new type of database-driven expert system, Cogitator, is proposed. It is shown to circumvent the Knowledge Acquisition Bottleneck and to possess many other advantages over both traditional expert systems and connectionist systems, whilst having no serious disadvantages.
Effective interprocess communication (IPC) in a real-time transputer network
The thesis describes the design and implementation of an interprocess communication (IPC)
mechanism within a real-time distributed operating system kernel (RT-DOS) which is
designed for a transputer-based network. The requirements of real-time operating systems
are examined and existing design and implementation strategies are described. Particular
attention is paid to one of the object-oriented techniques, although it is concluded that these
techniques are not feasible for the chosen implementation platform. Studies of a number of
existing operating systems are reported. The choices for various aspects of operating system
design and their influence on the IPC mechanism to be used are elucidated. The actual design
choices are related to the real-time requirements and the implementation that has been
adopted is described. [Continues.]
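
For background, transputer links implement synchronous point-to-point channels in the CSP/occam style: a send does not complete until the matching receive does. The sketch below reconstructs that rendezvous semantics in C++ for a single sender and a single receiver; Channel and its interface are our illustrative assumptions, not the RT-DOS mechanism described in the thesis.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <thread>

// One-slot synchronous channel for a single sender and a single receiver.
template <typename T>
class Channel {
public:
  // Blocks until the receiver has taken the value (CSP rendezvous).
  void send(T value) {
    std::unique_lock<std::mutex> lock(mutex_);
    slot_ = std::move(value);
    cv_.notify_one();
    cv_.wait(lock, [this] { return !slot_.has_value(); });
  }

  // Blocks until a value is available, then completes the rendezvous.
  T receive() {
    std::unique_lock<std::mutex> lock(mutex_);
    cv_.wait(lock, [this] { return slot_.has_value(); });
    T value = std::move(*slot_);
    slot_.reset();
    cv_.notify_one();  // release the blocked sender
    return value;
  }

private:
  std::optional<T> slot_;
  std::mutex mutex_;
  std::condition_variable cv_;
};

int main() {
  Channel<int> chan;
  std::thread producer([&chan] {
    for (int i = 1; i <= 3; ++i)
      chan.send(i);  // each send completes only after the matching receive
  });
  for (int i = 0; i < 3; ++i)
    std::cout << "received " << chan.receive() << '\n';
  producer.join();
}

Because sender and receiver synchronise on every message, no buffering or flow control is needed, which is what made this scheme attractive for memory-constrained real-time nodes.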
An Application Perspective on High-Performance Computing and Communications
We review possible and probable industrial applications of HPCC, focusing on the software and hardware issues. Thirty-three separate categories are illustrated by detailed descriptions of five areas: computational chemistry; Monte Carlo methods from physics to economics; manufacturing and computational fluid dynamics; command and control, or crisis management; and multimedia services to client computers and set-top boxes. The hardware varies from tightly coupled parallel supercomputers to heterogeneous distributed systems. The software models span HPF and data parallelism to distributed information systems and object/data-flow parallelism on the Web. We find that in each case it is reasonably clear that HPCC works in principle, and postulate that this knowledge can be used in a new generation of software infrastructure based on the WebWindows approach, discussed in an accompanying paper.
A Local Area Network to facilitate office automation in the Administrative Sciences Department.
http://archive.org/details/localareanetwork00peckN
Transactions and data management in NoSQL cloud databases
NoSQL databases have become the preferred option for storing and processing data in cloud computing, as they are capable of providing high data availability, scalability and efficiency. But in order to achieve these attributes, NoSQL databases make certain trade-offs. First, NoSQL databases cannot guarantee strong consistency of data; they guarantee only a weaker consistency based on the eventual consistency model. Second, NoSQL databases adopt a simple data model which makes it easy for data to be scaled across multiple nodes. Third, NoSQL databases do not support table joins and referential integrity, which, by implication, means they cannot implement complex queries. The combination of these factors implies that NoSQL databases cannot support transactions. Motivated by these crucial issues, this thesis investigates transactions and data management in NoSQL databases.
It presents a novel approach that implements transactional support for NoSQL databases in order to ensure stronger data consistency and provide an appropriate level of performance. The novelty lies in the design of a Multi-Key transaction model that guarantees the standard properties of transactions in order to ensure stronger consistency and integrity of data. The model is implemented in a novel loosely coupled architecture that separates the implementation of transactional logic from the underlying data, thus ensuring transparency and abstraction in cloud and NoSQL databases. The proposed approach is validated through the development of a prototype system using a real MongoDB system. An extended version of the standard Yahoo! Cloud Serving Benchmark (YCSB) has been used to test and evaluate the proposed approach. Various experiments have been conducted and sets of results generated. The results show that the proposed approach meets the research objectives: it maintains stronger consistency of cloud data as well as an appropriate level of reliability and performance.
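
As a rough sketch of how such a loosely coupled transaction layer can sit outside the store, the example below buffers writes on the client, records the version of each key it touches, and validates those versions at commit time against a stand-in versioned key-value store. The names (VersionedStore, MultiKeyTxn) and the optimistic validation protocol are assumptions for illustration, not the thesis's actual Multi-Key design.

#include <cstdint>
#include <iostream>
#include <map>
#include <mutex>
#include <string>
#include <unordered_map>

// Stand-in for the underlying NoSQL store: each key carries a version number.
class VersionedStore {
public:
  struct Entry { std::string value; uint64_t version = 0; };

  Entry get(const std::string& key) {
    std::lock_guard<std::mutex> lock(mutex_);
    return data_[key];
  }

  // Compare-and-swap style write: succeeds only if the version is unchanged.
  bool put_if(const std::string& key, const std::string& value,
              uint64_t expected_version) {
    std::lock_guard<std::mutex> lock(mutex_);
    Entry& e = data_[key];
    if (e.version != expected_version) return false;
    e.value = value;
    ++e.version;
    return true;
  }

private:
  std::unordered_map<std::string, Entry> data_;
  std::mutex mutex_;
};

// Transaction logic lives outside the store: buffer writes, remember the
// versions observed, and validate them all at commit time.
class MultiKeyTxn {
public:
  explicit MultiKeyTxn(VersionedStore& store) : store_(store) {}

  std::string read(const std::string& key) {
    auto e = store_.get(key);
    read_versions_[key] = e.version;
    return e.value;
  }

  void write(const std::string& key, const std::string& value) {
    if (!read_versions_.count(key))
      read_versions_[key] = store_.get(key).version;  // blind write: snapshot now
    writes_[key] = value;
  }

  // Optimistic commit: apply each buffered write only if no key has changed
  // since it was read; abort otherwise. Note: a production layer would also
  // persist a commit record so a failure between put_if calls cannot leave
  // the transaction half-applied; this sketch omits recovery.
  bool commit() {
    for (const auto& [key, value] : writes_)
      if (!store_.put_if(key, value, read_versions_[key])) return false;
    return true;
  }

private:
  VersionedStore& store_;
  std::map<std::string, std::string> writes_;
  std::unordered_map<std::string, uint64_t> read_versions_;
};

int main() {
  VersionedStore store;
  MultiKeyTxn txn(store);
  txn.write("alice", "90");
  txn.write("bob", "110");
  std::cout << (txn.commit() ? "committed" : "aborted") << '\n';
}

Keeping the versioning and validation in a separate client-side layer, as here, is one way to add multi-key guarantees without modifying the store itself, which mirrors the separation of transactional logic from data that the abstract describes.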