Causality, Information and Biological Computation: An algorithmic software approach to life, disease and the immune system
Biology has taken strong steps towards becoming a computer science aiming at
reprogramming nature after the realisation that nature herself has reprogrammed
organisms by harnessing the power of natural selection and the digital
prescriptive nature of replicating DNA. Here we further unpack ideas related to
computability, algorithmic information theory and software engineering in the
context of the extent to which biology can be (re)programmed, and of how we
may go about doing so more systematically with the tools and concepts
offered by theoretical computer science, in a translation exercise from
computing to molecular biology and back. These concepts provide a means of
hierarchical organization, thereby blurring previously clear-cut lines between
concepts such as matter and life, or between tumour types that are otherwise taken
to be different yet may not, in fact, have different causes. This does not diminish
the properties of life or make its components and functions less interesting.
On the contrary, this approach makes for a more encompassing and integrated
view of nature, one that subsumes observer and observed within the same system,
and can generate new perspectives and tools with which to view complex diseases
like cancer, approaching them afresh from a software-engineering viewpoint that
casts evolution in the role of programmer, cells as computing machines, DNA and
genes as instructions and computer programs, viruses as hacking devices, the
immune system as a software debugging tool, and diseases as an
information-theoretic battlefield where all these forces deploy. We show how
information theory and algorithmic programming may explain fundamental
mechanisms of life and death.
Comment: 30 pages, 8 figures. Invited chapter contribution to Information and Causality: From Matter to Life. Sara I. Walker, Paul C.W. Davies and George Ellis (eds.), Cambridge University Press.
Genetic network programming with reinforcement learning and optimal search component : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Sciences at Massey University, Auckland, New Zealand
This thesis presents ways of improving the genetic composition, structure and learning
strategies for a graph-based evolutionary algorithm called Genetic Network Programming
with Reinforcement Learning (GNP-RL), particularly when working with multi-agent and
dynamic environments. GNP-RL improves on Genetic Programming by allowing the concise
representation of solutions as a networked graph structure, and it uses RL to
further refine the graph solutions. This work has improved GNP-RL by combining three new
techniques: Firstly, it has added a reward and punishment scheme as part of its learning strategy
that supports constraint conformance, allowing for a more adaptive training of the agent, so
that it can learn how to avoid unwanted situations more effectively. Secondly, an optimal
search algorithm has been combined in the GNP-RL core to get an accurate analysis of the
exploratory environment. Thirdly, a task prioritization technique has been added to the agent’s
learning by giving promotional rewards, so they are trained on how to take priority into account
when performing tasks. In this thesis, we applied the improved algorithm to the Tile World
benchmarking testbed, which is considered one of the standard complex problems in this
domain, having only a sparse training set. Our experimental results show that the proposed
algorithm is superior to the best existing variant of the GNP-RL algorithm [1]. We have
achieved 86.66% test accuracy on the standard benchmarking dataset [2]. In addition, we have
created another benchmarking dataset, similar in complexity to the one proposed in [1], to
test the proposed algorithm further, on which it achieved a test accuracy of 96.66%; that is 33.66%
more accurate
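The reward-and-punishment scheme with priority-based promotional rewards described in this abstract can be sketched in a few lines. This is an illustrative assumption of how such reward shaping might look, not the thesis's actual formulation; the base reward, penalty and bonus constants are invented for the example:

```python
# Illustrative sketch only: a shaped reward combining the three ideas named
# above -- a task-completion reward, a punishment for violating a constraint
# (entering an unwanted situation), and a promotional bonus scaled by the
# task's priority. The constants are assumptions, not values from the thesis.

def shaped_reward(task_done, violated, priority,
                  base=1.0, penalty=-2.0, bonus=0.5):
    """Task reward + constraint punishment + priority-scaled promotion."""
    r = 0.0
    if task_done:
        r += base + bonus * priority   # promotional reward: priority tasks pay more
    if violated:
        r += penalty                   # punishment steers the agent away from unwanted states
    return r
```

An RL agent trained with such a signal is pushed both toward high-priority tasks and away from constraint-violating states, which is the combination the three techniques above aim at.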
Analysis and Test of the Effects of Single Event Upsets Affecting the Configuration Memory of SRAM-based FPGAs
SRAM-based FPGAs are increasingly relevant in a growing number of safety-critical application fields, ranging from automotive to aerospace. These application fields are characterized by a harsh radiation environment that can cause the occurrence of Single Event Upsets (SEUs) in digital devices. These faults have particularly adverse effects on SRAM-based FPGA systems because not only can they temporarily affect
the behaviour of the system by changing the contents of flip-flops or memories, but they can also permanently change the functionality implemented by the system itself, by changing the content of the configuration memory. Designing safety-critical applications requires accurate methodologies to evaluate the system's sensitivity to SEUs as early as possible during the design process. Moreover, it is necessary to detect the occurrence of SEUs during the system's lifetime. For this purpose, test patterns should be generated during the design process and then applied to the inputs of the system during its operation. In this thesis we propose a set of software tools that can be used by designers of SRAM-based FPGA safety-critical applications to assess the system's sensitivity to SEUs and to generate test patterns for in-service testing. The main feature of these tools is that they implement a model of SEUs affecting the configuration bits controlling the logic and routing resources of an FPGA device, a model that has been demonstrated to be much more accurate than the classical stuck-at and open/short models commonly used in the analysis of faults in digital devices. By taking this accurate
fault model into account, the proposed tools are more accurate than the similar academic and commercial tools available today for the analysis of faults in digital circuits, which do not take the features of FPGA technology into account.
In particular three tools have been designed and developed: (i) ASSESS: Accurate Simulator of SEuS affecting the configuration memory of SRAM-based FPGAs, a simulator of SEUs affecting the configuration memory of an SRAM-based FPGA system
for the early assessment of the sensitivity to SEUs; (ii) UA2TPG: Untestability Analyzer
and Automatic Test Pattern Generator for SEUs Affecting the Configuration Memory of SRAM-based FPGAs, a static analysis tool for the identification of untestable SEUs and for the automatic generation of test patterns for in-service testing of 100% of the testable SEUs; and (iii) GABES: Genetic Algorithm Based Environment for SEU Testing in SRAM-FPGAs, an environment based on a Genetic Algorithm for the generation of an optimized set of test patterns for in-service testing of SEUs. The proposed tools have been applied to several circuits from the ITC'99 benchmark suite. The results obtained from these experiments have been compared with results
obtained from similar experiments in which we considered the stuck-at fault model instead
of the more accurate model for SEUs. From the comparison of these experiments we have been able to verify that the proposed software tools are indeed more accurate than similar tools available today. In particular, the comparison between results obtained using ASSESS and those obtained by fault injection has shown that the proposed fault simulator has an average error of 0.1% and a maximum error of 0.5%, while using a stuck-at fault simulator the average error with respect to the fault injection experiment was 15.1%, with a maximum error of 56.2%. Similarly, the comparison between the results obtained using UA2TPG for the accurate SEU model and the results obtained for stuck-at faults has shown an average difference in untestability of 7.9%, with a maximum of 37.4%. Finally, the comparison between the
fault coverages obtained by test patterns generated for the accurate SEU model and the fault coverages obtained by test patterns designed for stuck-at faults shows that the former detect 100% of the testable faults, while the latter reach an average fault coverage of 78.9%, with a minimum of 54% and a maximum of 93.16%.
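The difference between the configuration-memory SEU model and the classical stuck-at model can be illustrated on a single look-up table (LUT), whose function is stored as configuration bits. An SEU flips one configuration bit and thereby alters the implemented function for exactly one input combination; a test pattern for that SEU is any input on which the faulty and fault-free LUTs disagree. The following toy sketch is an assumption-laden simplification of that idea, not the actual model used by the tools above:

```python
# Toy model: a k-input LUT stores its truth table as 2**k configuration bits.
# An SEU flips one of those bits; test patterns are the inputs whose outputs
# differ between the fault-free and the faulty LUT. The bit ordering and the
# example functions are assumptions for illustration only.

def lut_output(config_bits, inputs):
    """Evaluate a k-input LUT: the input vector indexes into the config bits."""
    idx = 0
    for b in inputs:
        idx = (idx << 1) | b
    return config_bits[idx]

def seu_test_patterns(config_bits, bit_position, k):
    """Input vectors that detect an SEU flipping one configuration bit."""
    faulty = list(config_bits)
    faulty[bit_position] ^= 1          # the single event upset: one bit flips
    patterns = []
    for n in range(2 ** k):
        vec = [(n >> (k - 1 - i)) & 1 for i in range(k)]
        if lut_output(config_bits, vec) != lut_output(faulty, vec):
            patterns.append(vec)
    return patterns
```

Because the upset corrupts the stored function rather than a signal line, each configuration-bit SEU is detected by exactly one input combination of the LUT, which is why stuck-at test sets can miss it.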
Intelligent design of manufacturing systems.
The design of a manufacturing system is normally performed in two distinct stages, i.e.
steady-state design and dynamic-state design. Within each system design stage a variety of
decisions need to be made, essential among which are the determination of the product
range to be manufactured, the layout of equipment on the shopfloor, the allocation of work
tasks to workstations, the planning of aggregate capacity requirements and the determination
of the lot sizes to be processed.
This research work has examined the individual problem areas listed above in order to
identify the efficiency of current solution techniques and to determine the problems
experienced with their use. It has been identified that, for each design problem, although
there is an assortment of solution techniques available, the majority of these techniques are
unable to generate optimal or near-optimal solutions to problems of a practical size. In
addition, a variety of limitations have been identified that restrict the use of existing
techniques. For example, existing methods are limited with respect to the external
conditions over which they are applicable and/or cannot enable qualitative or subjective
judgements of experienced personnel to influence solution outcomes.
An investigation of optimization techniques has been carried out which indicated that
genetic algorithms offer great potential in solving the variety of problem areas involved in
manufacturing systems design. This research has, therefore, concentrated on testing the use
of genetic algorithms to make individual manufacturing design decisions. In particular, the
ability of genetic algorithms to generate better solutions than existing techniques has been
examined, as has their ability to overcome the range of limitations that exist with current
solution techniques.
For each problem area, a typical solution has been coded in terms of a genetic algorithm
structure, a suitable objective function constructed and experiments performed to identify
the most suitable operators and operator parameter values to use. The best solution
generated using these parameters has then been compared with the solution derived using a
traditional solution technique. In addition, from the range of experiments undertaken the
underlying relationships have been identified between problem characteristics and optimality
of operator types and parameter values.
The results of the research have identified that genetic algorithms could provide an
improved solution technique for all manufacturing design decision areas investigated. In
most areas genetic algorithms identified lower-cost solutions and overcame many of the
limitations of existing techniques.
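As a rough illustration of how a manufacturing design decision can be coded as a genetic algorithm structure with an objective function, the sketch below assigns work tasks to workstations so as to minimize the heaviest station load. The encoding, operators and parameter values are generic textbook assumptions, not the ones identified in this research:

```python
import random

# Illustrative sketch only: a minimal genetic algorithm for one of the design
# decisions listed above (allocation of work tasks to workstations). Each
# chromosome assigns every task a station; fitness is the heaviest station
# load (the cycle time), which the GA tries to minimize.

def cycle_time(assignment, task_times, n_stations):
    loads = [0.0] * n_stations
    for station, t in zip(assignment, task_times):
        loads[station] += t
    return max(loads)

def evolve(task_times, n_stations, pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    n = len(task_times)
    pop = [[rng.randrange(n_stations) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: cycle_time(a, task_times, n_stations))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                # mutation: move one task
                child[rng.randrange(n)] = rng.randrange(n_stations)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: cycle_time(a, task_times, n_stations))
```

Swapping in a different objective function (layout cost, lot-sizing cost, capacity shortfall) while keeping the same GA skeleton is exactly the flexibility the thesis exploits across its decision areas.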
Physical parameter-aware Networks-on-Chip design
PhD Thesis
Networks-on-Chip (NoCs) have been proposed as a scalable, reliable
and power-efficient communication fabric for chip multiprocessors
(CMPs) and multiprocessor systems-on-chip (MPSoCs). NoCs determine
both the performance and the reliability of such systems, with a
significant power demand that is expected to increase due to developments
in both technology and architecture. In terms of architecture, an
important trend in many-core systems architecture is to increase the
number of cores on a chip while reducing their individual complexity.
This trend increases communication power relative to computation
power. Moreover, technology-wise, power-hungry wires are dominating
logic as power consumers as technology scales down. For these
reasons, the design of future very large scale integration (VLSI) systems
is moving from being computation-centric to communication-centric.
On the other hand, the integrity of a chip's physical parameters, especially
power and thermal integrity, is crucial for reliable VLSI systems. However,
guaranteeing this integrity is becoming increasingly difficult with
the higher scale of integration due to increased power density and operating
frequencies that result in continuously increasing temperature
and voltage drops in the chip. This is a challenge that may prevent
further shrinking of devices. Thus, tackling the challenge of power
and thermal integrity of future many-core systems at only one level
of abstraction, the chip and package design for example, is no longer
sufficient to ensure the integrity of physical parameters. New design-time
and run-time strategies may need to work together at different
levels of abstraction, such as package, application, network, to provide
the required physical parameter integrity for these large systems. This
necessitates strategies that work at the level of the on-chip network
with its rising power budget.
This thesis proposes models, techniques and architectures to improve
power and thermal integrity of Network-on-Chip (NoC)-based
many-core systems. The thesis is composed of two major parts: i)
minimization and modelling of power supply variations to improve
power integrity; and ii) dynamic thermal adaptation to improve thermal
integrity. This thesis makes four major contributions. The first is
a computational model of on-chip power supply variations in NoCs.
The proposed model embeds a power delivery model, an NoC activity
simulator and a power model. The model is verified with SPICE simulation
and employed to analyse power supply variations in synthetic
and real NoC workloads. Novel observations regarding power supply
noise correlation with different traffic patterns and routing algorithms
are found. The second is a new application mapping strategy aiming
to minimize power supply noise in NoCs. This is achieved by defining
a new metric, switching activity density, and employing a force-based
objective function that results in minimizing switching density. Significant
reductions in power supply noise (PSN) are achieved with a low
energy penalty. This reduction in PSN also results in a better link timing
accuracy. The third contribution is a new dynamic thermal-adaptive
routing strategy to effectively diffuse heat from NoC-based three-dimensional
(3D) CMPs, using a dynamic programming (DP)-based distributed
control architecture. Moreover, a new approach for efficient extension
of two-dimensional (2D) partially-adaptive routing algorithms
to 3D is presented. This approach improves three-dimensional network-on-chip
(3D NoC) routing adaptivity while ensuring deadlock-freeness.
Finally, the proposed thermal-adaptive routing is implemented in
field-programmable gate array (FPGA), and implementation challenges,
for both thermal sensing and the dynamic control architecture are addressed.
The proposed routing implementation is evaluated in terms
of both functionality and performance.
The methodologies and architectures proposed in this thesis open a
new direction for improving the power and thermal integrity of future
NoC-based 2D and 3D many-core architectures
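The switching-activity-density idea behind the mapping contribution above can be illustrated with a toy metric: accumulate the activity that XY routing induces on each tile of a 2D mesh, then score a task-to-tile mapping by its densest region. The routing function, window size and traffic representation below are assumptions for illustration, far simpler than the thesis's force-based objective:

```python
# Toy version of a switching-activity-density metric for NoC mapping.
# Traffic is a dict {(src_task, dst_task): volume}; a mapping places tasks
# on mesh tiles. Lower peak density means switching activity (and hence
# power supply noise) is spread more evenly across the chip.

def xy_path(src, dst):
    """Tiles visited by dimension-ordered (X-then-Y) routing on a 2D mesh."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

def peak_density(mapping, traffic, mesh, window=2):
    """Score a mapping by the total activity in its densest window x window block."""
    activity = {}
    for (a, b), vol in traffic.items():
        for tile in xy_path(mapping[a], mapping[b]):
            activity[tile] = activity.get(tile, 0) + vol
    w, h = mesh
    densest = 0
    for ox in range(w - window + 1):
        for oy in range(h - window + 1):
            total = sum(activity.get((ox + i, oy + j), 0)
                        for i in range(window) for j in range(window))
            densest = max(densest, total)
    return densest
```

A mapping search (force-based in the thesis) would then prefer placements that lower this peak, trading a little communication energy for reduced power supply noise.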
Reusability in manufacturing, supported by value net and patterns approaches
The concept of manufacturing and the need or desire to create artefacts or products is
very old, yet it remains an essential component of all modern economies. Indeed,
manufacturing is one of the few ways that wealth is created. The creation or
identification of good quality, sustainable product designs is fundamental to the
success of any manufacturing enterprise. Increasingly, there is also a requirement for
the manufacturing system which will be used to manufacture the product, to be
designed (or redesigned) in parallel with the product design. Many different types of
manufacturing knowledge and information will contribute to these designs. A key
question therefore for manufacturing companies to address is how to make the very
best use of their existing, valuable, knowledge resources.
[…] The research reported in this thesis examines ways of reusing existing manufacturing
knowledge of many types, particularly in the area of manufacturing systems design.
The successes and failures of reported reuse programmes are examined, and lessons
learnt from their experiences. This research is therefore focused on identifying
solutions that address both technical and non-technical requirements simultaneously,
to determine ways to facilitate and increase the reuse of manufacturing knowledge in
manufacturing system design. [Continues.]