SIMULATION OF A MULTIPROCESSOR COMPUTER SYSTEM
The introduction of computers and software engineering in telephone
switching systems has dictated the need for powerful design aids
for such complex systems. Among these design aids, simulators -
real-time environment simulators and flat-level simulators - have
been found particularly useful in stored program controlled switching
systems design and evaluation. However, both types of simulators
suffer from certain disadvantages.
An alternative methodology for the simulation of stored program
controlled switching systems is proposed in this research. The
methodology is based on the development of a process-based multilevel
hierarchically structured software simulator. This methodology
eliminates the disadvantages of environment and flat-level simulators.
It enables the modelling of the system in a one-to-one transformation,
retaining the sub-system interfaces and hence making it easier to see
the resemblance between the model and the modelled system and to
incorporate design modifications and/or additions in the simulator.
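The process-based, multilevel structure described above can be illustrated with a minimal sketch. The following Python fragment is an assumption-laden toy, not the thesis's simulator: each sub-system model is a generator "process" that yields the delay until its next event, and higher-level application models are added alongside the lower-level control model without changing its interface.

```python
import heapq

class Simulator:
    """Minimal process-based discrete-event kernel: each process is a
    generator that yields the delay until its next event."""
    def __init__(self):
        self.now = 0.0
        self._queue = []   # (wake_time, seq, process)
        self._seq = 0

    def add(self, process, delay=0.0):
        heapq.heappush(self._queue, (self.now + delay, self._seq, process))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, proc = heapq.heappop(self._queue)
            try:
                delay = next(proc)       # process runs until its next wait
                self.add(proc, delay)    # re-schedule at its requested time
            except StopIteration:
                pass

log = []

def processor_utility(sim):
    """Hypothetical lower-level sub-system model: periodic OS tick."""
    while True:
        log.append(("os-tick", sim.now))
        yield 10.0

def call_handler(sim):
    """Hypothetical application model added one level higher, interacting
    with the lower level only through its interface (here, the log)."""
    while True:
        log.append(("call", sim.now))
        yield 25.0

sim = Simulator()
sim.add(processor_utility(sim))
sim.add(call_handler(sim))
sim.run(until=50.0)
```

Because each model is a self-contained process, adding or replacing a sub-system model is a local change, which mirrors the open-ended, multilevel property claimed for the methodology.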
This methodology has been applied in building a simulation package
for the System X family of exchanges. The Processor Utility Sub-system
used to control the exchanges is first simulated, verified and validated.
The application sub-system models are then added one level higher,
resulting in an open-ended simulator having sub-systems models at
different levels of detail and capable of simulating any member of the
System X family of exchanges. The viability of the methodology is
demonstrated by conducting experiments to tune the real-time operating
system and by simulating a particular exchange - The Digital Main
Network Switching Centre - in order to determine its performance
characteristics.
The General Electric Company Ltd, GEC Hirst Research Centre, Wembley
Functional Programming for Embedded Systems
Embedded systems application development has traditionally been carried out in low-level, machine-oriented programming languages such as C or assembler, which can result in unsafe, error-prone and difficult-to-maintain code. Functional programming, with features such as higher-order functions, algebraic data types, polymorphism, strong static typing and automatic memory management, appears to be an ideal candidate for addressing the issues with the low-level languages plaguing embedded systems. However, embedded systems usually run on heavily memory-constrained devices, with memory in the order of hundreds of kilobytes, and applications running on such devices embody the general characteristics of being (i) I/O-bound, (ii) concurrent and (iii) timing-aware. Popular functional language compilers and runtimes either do not fare well with such scarce memory resources or do not provide high-level abstractions that address all three of the listed characteristics. This work attempts to address this gap by investigating and proposing high-level abstractions specialised for I/O-bound, concurrent and timing-aware embedded-systems programs. We implement the proposed abstractions on eagerly-evaluated, statically-typed functional languages running natively on microcontrollers.
Our contributions are divided into two parts. Part 1 presents a functional reactive programming language, Hailstorm, that tracks side effects like I/O in its type system using a feature called resource types. Hailstorm's programming model is illustrated on the GRiSP microcontroller board. Part 2 comprises two papers that describe the design and implementation of Synchron, a runtime API that provides a uniform message-passing framework for the handling of software messages as well as hardware interrupts. Additionally, the Synchron API supports a novel timing operator to capture the notion of time, common in embedded applications.
The Synchron API is implemented as a virtual machine, SynchronVM, that runs on the NRF52 and STM32 microcontroller boards. We present programming examples that illustrate the concurrency, I/O and timing capabilities of the VM and provide various benchmarks on the response time, memory and power usage of SynchronVM.
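The idea of funnelling software messages and hardware interrupts through one message-passing interface, with a timing operator layered on top, can be sketched in a few lines. The Python below is an illustrative toy under assumed names (`Channel`, `sync_at`); it is not Synchron's actual API, only a picture of the uniform-mailbox idea.

```python
import heapq

class Channel:
    """Single mailbox through which both software messages and
    (simulated) hardware interrupts are delivered, echoing the idea of
    a uniform message-passing framework. Illustrative only."""
    def __init__(self):
        self._heap = []       # (timestamp, seq, payload)
        self._seq = 0

    def send(self, timestamp, payload):
        heapq.heappush(self._heap, (timestamp, self._seq, payload))
        self._seq += 1

    def recv(self):
        """Deliver the earliest pending event, whatever its origin."""
        t, _, payload = heapq.heappop(self._heap)
        return t, payload

def sync_at(channel, deadline, payload):
    """Toy timing operator: schedule an action for a future instant,
    loosely mirroring a 'run this at time t' primitive."""
    channel.send(deadline, ("timer", payload))

ch = Channel()
ch.send(5, ("interrupt", "gpio-rising"))   # hardware event
ch.send(1, ("message", "sensor-read"))     # software message
sync_at(ch, 3, "blink-led")                # timed action

events = [ch.recv() for _ in range(3)]     # delivered in time order
```

The point of the sketch is that the receiver needs only one `recv` path regardless of whether an event originated in software, in hardware, or from the timing operator.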
Event and Time-Triggered Control Module Layers for Individual Robot Control Architectures of Unmanned Agricultural Ground Vehicles
Automation in the agriculture sector has increased to an extent where the accompanying methods for unmanned field management are becoming more economically viable. This manifests in the industry's recent presentation of conceptual cab-less machines that perform all field operations under the high-level task control of a single remote operator. A dramatic change is predicted in the overall workflow for field tasks that historically assumed the presence of a human in the immediate vicinity of the work. This shift in the entire approach to farm machinery work gives producers increased control and productivity over high-level tasks, with less distraction from operating individual machine actuators and implements. The final implication is decreased mechanical complexity of the cab-less field machines compared with their manned counterparts.
An Unmanned Agricultural Ground Vehicle (UAGV) electric platform received a portable, modular control module layer (CML) able to accept higher-level mission commands while returning system states to high-level tasks. The simplicity of this system was shown by its entire implementation running on microcontrollers networked on a Time-Triggered Controller Area Network (TTCAN) bus. A basic form of user input and output was added to the system to demonstrate a simple instance of sub-system integration. In this work, all major levels of design and implementation are examined in detail, revealing the 'why' and 'how' of each subsystem. System design philosophy is highlighted from the beginning. A state-space feedback steering controller was implemented on the machine using a basic steering model found in the literature.
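A state-space feedback steering controller of the kind mentioned above computes the steering command as u = -Kx from an error state x. The discrete-time sketch below is an assumption for illustration (a small-angle kinematic model with invented speed, wheelbase and gains), not the thesis's actual UAGV controller.

```python
# Illustrative state-space feedback steering sketch (u = -K x) with
# state x = [lateral error e, heading error psi]. Model and gains are
# assumptions for the example, not the thesis's controller.
v, L, dt = 1.0, 1.0, 0.1          # speed (m/s), wheelbase (m), step (s)
K = (0.5, 1.5)                    # feedback gains

def step(e, psi):
    delta = -(K[0] * e + K[1] * psi)       # steering command u = -K x
    e_next = e + v * psi * dt              # small-angle lateral dynamics
    psi_next = psi + (v / L) * delta * dt  # heading responds to steering
    return e_next, psi_next

e, psi = 1.0, 0.0                 # start 1 m off the reference path
for _ in range(100):              # 10 s of closed-loop simulation
    e, psi = step(e, psi)
# both errors decay towards zero under the stabilising gains
```

With these gains the closed-loop eigenvalues are inside the unit circle (0.95 and 0.9), so both error states decay geometrically; other gain choices would need the same check.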
Finally, system performance is evaluated from the perspectives of a number of disciplines including: embedded systems software design, control systems, and robot control architecture. Recommendations for formalized UAGV system modeling, estimation, and control are discussed for the continuation of research in simplified low-cost machines for in-field task automation. Additional recommendations for future time-triggered CML experiments in bus robustness and redundancy are discussed. The work presented is foundational in the shift from event-triggered communications towards time-triggered CMLs for unmanned agricultural machinery and is a front-to-back demonstration of time-triggered design.
Advisor: Santosh K. Pitl
Extending the Decision-Making Capabilities in Remanufacturing Service Contracts by Using Symbiotic Simulation
Remanufacturing is a critical enabler of a resource-efficient manufacturing industry and has long been associated with high-value products. Over time, the commercial relationship between customers and service providers has been formed through the fulfilment of rights and obligations under remanufacturing service contracts. Nonetheless, financial analyses to evaluate contract terms and conditions are becoming increasingly difficult to conduct because of the complex decision problems inherent in remanufacturing systems. In order to achieve better and safer decision-making to shape business strategies, remanufacturers often employ computer-based simulation tools to assess contractual obligations and customers' needs. This paper discusses the roles of a symbiotic simulation system (SSS) in supporting decision-making in remanufacturing systems. An industrial case study of power transformer remanufacturing illustrates how an SSS can support contract remanufacturers in managing service contract planning and execution. By linking the simulation model to the physical system, it has been demonstrated that the capabilities of remanufacturers to make critical decisions throughout the entire service contract period can be extended.
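The defining feature of a symbiotic simulation system is the live link between model and physical system: observations flow in, the model is recalibrated, and decisions are taken against the updated estimate. The Python below is a deliberately tiny sketch of that loop under invented numbers and names (`SymbioticModel`, lead times in days); it is not the paper's case-study model.

```python
import random

random.seed(42)

def physical_system_sample():
    """Stand-in for live data from the physical remanufacturing line;
    in a real SSS this would come from shop-floor sensors and records."""
    return random.gauss(12.0, 2.0)   # observed job lead time, days

class SymbioticModel:
    """Toy symbiotic loop: the lead-time estimate is continuously
    recalibrated from physical-system observations, and contract
    decisions are made against the live estimate."""
    def __init__(self, initial_estimate):
        self.estimate = initial_estimate
        self.n = 0

    def assimilate(self, observation):
        self.n += 1
        # running mean of observations replaces the offline guess
        self.estimate += (observation - self.estimate) / self.n

    def accept_contract(self, deadline_days, jobs):
        """Accept only if the calibrated model predicts on-time delivery."""
        return self.estimate * jobs <= deadline_days

model = SymbioticModel(initial_estimate=8.0)   # optimistic offline guess
for _ in range(200):                           # contract-execution phase
    model.assimilate(physical_system_sample())
```

The offline guess of 8 days would have accepted contracts the live data shows to be infeasible; the assimilation step is what extends the decision-making capability across the contract period.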
An evaluation of load sharing algorithms for heterogeneous distributed systems
Distributed systems offer the ability to execute a job at other nodes than the originating one. Load sharing algorithms use this ability to distribute work around the system in order to achieve greater efficiency. This is reflected in substantially reduced response times. In the majority of studies the systems on which load sharing has been evaluated have been homogeneous in nature. This thesis considers load sharing in heterogeneous systems, in which the heterogeneity is exhibited in the processing power of the constituent nodes.
Existing algorithms are evaluated and improved ones proposed. Most of the performance analysis is done through simulation. A model of diskless workstations communicating and transferring jobs by Remote Procedure Call is used. All assumptions about the overheads of inter-node communication are based upon measurements made on the university networks.
The comparison of algorithms identifies those characteristics that offer improved performance in heterogeneous systems. The level of system information required for transfer is investigated and an optimum found. Judicious use of the collected information via algorithm design is shown to account for much of the improvement. However, detailed examination of algorithm behaviour compared with that of an 'optimum' load sharing scenario reveals that there are occasions when full use of all the information available is not beneficial. Investigations are carried out on the most promising algorithms to assess their adaptability, scalability and stability under a variety of differing conditions. The standard definitions of load balancing and load sharing are shown not to apply when considering heterogeneous systems.
To validate the assumptions in the simulation model, a load sharing scenario was implemented on a network of Sun workstations at the University. While the scope of the implementation was somewhat limited by lack of resources, it does demonstrate the relative ease with which the algorithms can be implemented without alteration of the operating system code or modification at the kernel level.
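The key twist heterogeneity adds to load sharing is that raw queue length is misleading: load must be weighted by the processing power of each node. The sketch below is an invented threshold policy for illustration (node names, speeds and threshold are assumptions), not one of the algorithms evaluated in the thesis.

```python
# Illustrative threshold-based load sharing for heterogeneous nodes:
# load is normalised by processing power, and a job is transferred only
# when the local normalised load exceeds a threshold.

def normalised_load(queue_length, speed):
    """Queue length weighted by node speed: a fast node with a long
    queue may still clear work sooner than a slow, lightly loaded one."""
    return queue_length / speed

def place_job(local, nodes, threshold=1.0):
    """Run locally unless overloaded; otherwise transfer to the node
    with the lowest normalised load."""
    name, queue, speed = local
    if normalised_load(queue, speed) <= threshold:
        return name
    return min(nodes, key=lambda n: normalised_load(n[1], n[2]))[0]

# (name, queue_length, speed_in_relative_units)
local = ("A", 6, 2.0)                 # normalised load 3.0 -> overloaded
remotes = [("B", 4, 4.0),             # normalised load 1.0
           ("C", 1, 0.5)]             # normalised load 2.0 (slow node)
```

Note that node C has the shortest queue but the highest effective load once speed is accounted for, which is exactly the case a homogeneous-system algorithm gets wrong.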
Algorithms and architectures for the multirate additive synthesis of musical tones
In classical Additive Synthesis (AS), the output signal is the sum of a large number of independently controllable sinusoidal partials. The advantages of AS for music synthesis are well known, as is the high computational cost. This thesis is concerned with the computational optimisation of AS by multirate DSP techniques. In note-based music synthesis, the expected bounds of the frequency trajectory of each partial in a finite-lifecycle tone determine critical time-invariant partial-specific sample rates which are lower than the conventional rate (in excess of 40kHz), resulting in computational savings. Scheduling and interpolation (to suppress quantisation noise) for many sample rates are required, leading to the concept of Multirate Additive Synthesis (MAS), where these overheads are minimised by synthesis filterbanks which quantise the set of available sample rates. Alternative AS optimisations are also appraised.
It is shown that a hierarchical interpretation of the QMF filterbank preserves AS generality and permits efficient context-specific adaptation of computation to required note dynamics. Practical QMF implementation and the modifications necessary for MAS are discussed. QMF transition widths can be logically excluded from the MAS paradigm, at a cost; therefore a novel filterbank is evaluated where transition widths are physically excluded. Benchmarking of a hypothetical orchestral synthesis application provides a tentative quantitative analysis of the performance improvement of MAS over AS.
The mapping of MAS into VLSI is opened by a review of sine computation techniques. Then the functional specification and high-level design of a conceptual MAS Coprocessor (MASC) is developed, which functions with high autonomy in a loosely-coupled master-slave configuration with a Host CPU which executes filterbanks in software.
Standard hardware optimisation techniques are used, such as pipelining, based upon the principle of an application-specific memory hierarchy which maximises MASC throughput.
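The source of the computational saving can be made concrete with a toy calculation: each partial gets the lowest octave-quantised sample rate (as an octave-spaced QMF filterbank would provide) that still satisfies Nyquist, with a margin, for its maximum expected frequency. The full rate, guard factor and partial frequencies below are assumptions for the example.

```python
# Toy illustration of the multirate idea: choose, per partial, the
# smallest rate FULL_RATE / 2**k that keeps the partial below Nyquist
# with a guard margin. Numbers are invented for illustration.
FULL_RATE = 48_000

def partial_rate(f_max, guard=1.25):
    """Smallest octave-quantised rate with rate >= guard * 2 * f_max."""
    rate = FULL_RATE
    while rate / 2 >= guard * 2 * f_max:
        rate //= 2
    return rate

# Maximum frequency-trajectory bound for each partial of a tone (Hz)
partials = [440, 880, 1760, 3520, 7040]
rates = [partial_rate(f) for f in partials]

# Fraction of per-sample work avoided relative to running every
# partial's oscillator at the full conventional rate
saving = 1 - sum(rates) / (len(partials) * FULL_RATE)
```

Low partials dominate typical harmonic tones, and it is exactly those that drop to the cheapest rates; the scheduling and interpolation overheads the abstract mentions are what this toy omits.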
Performance measurement and evaluation of time-shared operating systems
Time-shared, virtual memory systems are very complex, and changes in their performance may be caused by many factors - by variations in the workload as well as by changes in system configuration. The evaluation of these systems can thus best be carried out by linking results obtained from a planned programme of measurements, taken on the system, to some model of it. Such a programme of measurements is best carried out under conditions in which all the parameters likely to affect the system's performance are reproducible and under the control of the experimenter. For this to be possible, the workload used must be simulated and presented to the target system through some form of automatic workload driver.
A case study of such a methodology is presented in which the system (in this case the Edinburgh Multi-Access System) is monitored during a controlled experiment (designed and analysed using standard techniques in common use in many other branches of experimental science) and the results so obtained are used to calibrate and validate a simple simulation model of the system. This model is then used in further investigation of the effect of certain system parameters upon system performance. The factors covered by this exercise include the effect of varying main memory size, process loading algorithm and secondary memory characteristics.
…
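The calibrate-against-measurements step described in this abstract can be pictured with a minimal fitting sketch. Everything below is invented for illustration (a simple utilisation-based response-time model and made-up measurement points); it is not the EMAS model or data.

```python
# Toy sketch of calibrating a simulation model against measured data:
# a simple response-time model R = S / (1 - U) is fitted to benchmark
# measurements by choosing the service time S with least squared error.

def model_response(service_time, utilisation):
    """Predicted mean response time at a given utilisation."""
    return service_time / (1.0 - utilisation)

# (utilisation, measured mean response time) from a controlled
# experiment with a reproducible, driver-generated workload (invented)
measurements = [(0.2, 0.25), (0.5, 0.41), (0.8, 1.02)]

def fit_service_time(data, candidates):
    """Pick the candidate parameter minimising squared prediction error."""
    def error(s):
        return sum((model_response(s, u) - r) ** 2 for u, r in data)
    return min(candidates, key=error)

candidates = [round(0.01 * k, 2) for k in range(1, 51)]
best = fit_service_time(measurements, candidates)
```

Once calibrated, the same model can be re-run with altered parameters (memory size, loading algorithm) to predict effects that would be costly to measure directly, which is the role the simulation model plays in the case study.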