
    A model-based approach for automatic recovery from memory leaks in enterprise applications

    Large-scale distributed computing systems such as data centers are hosted on heterogeneous, networked servers that execute in a dynamic and uncertain operating environment, shaped by factors such as time-varying user workload and various failures. Achieving stringent quality-of-service goals is therefore challenging and requires a comprehensive approach to performance control, fault diagnosis, and failure recovery. This work presents a model-based approach to fault management, integrating limited lookahead control (LLC), diagnosis, and fault-tolerance concepts to: (1) enable systems to adapt to environment variations, (2) maintain the availability and reliability of the system, and (3) facilitate system recovery from failures. This thesis focuses on memory leak errors. A characterization function is designed to detect memory leaks. An LLC scheme is then applied to enable the computing system to adapt efficiently to variations in the workload, to recover from memory leaks, and to maintain functionality.
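
    As an illustrative sketch only (not the thesis's actual detector), a characterization function of this kind might flag a leak when sampled memory usage shows a sustained upward trend; the window size and slope threshold below are assumed parameters invented for the example.

```python
# Sketch: flag a suspected memory leak from sustained growth in
# sampled heap usage. Window and threshold values are illustrative.

def leak_suspected(samples, window=30, slope_threshold=0.5):
    """Return True if the last `window` samples (MB) trend upward
    more steeply than `slope_threshold` MB per sample."""
    recent = samples[-window:]
    if len(recent) < window:
        return False  # not enough history yet
    n = len(recent)
    mean_x = (n - 1) / 2
    mean_y = sum(recent) / n
    # Least-squares slope of usage over sample index.
    cov = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(recent))
    var = sum((i - mean_x) ** 2 for i in range(n))
    return cov / var > slope_threshold
```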

    Controlling speculative execution through a virtually ordered memory system

    Processors which extract parallelism through speculative execution must be able to identify when mis-speculation has occurred. The three places where mis-speculation can occur are register accesses, control flow prediction and memory accesses. Controlling register and control flow speculation has been well studied, but no scalable techniques for detecting memory dependence violations have been identified. Since speculative execution occurs out of order, this requires tracking the causal order, as well as the addresses, of memory accesses. This thesis uses simulations to investigate tracking the causal order of memory accesses using explicit tags known as virtual timestamps, a distributed and scalable method. Realizable virtual timestamps are necessarily restricted in length, and it is demonstrated that naive allocation schemes seriously constrain execution by allocating virtual timestamps inefficiently. Efficient allocation requires analysis of the number of timestamps required by each section of code. Basic statically and dynamically evaluated analysis methods are established to prevent virtual timestamp allocation from becoming a resource bottleneck. The same analysis is also used to efficiently allocate state-saving resources in a fixed hardware order. The hardware order provides an alternative way of maintaining the causal order using a simple hardware organization. The ability to predict the resources required by regions of code is used as a way of selecting instructions to execute speculatively. This enables resources to be allocated efficiently and is shown to allow large amounts of parallelism to be extracted. It also improves the effectiveness of speculative execution by issuing fewer instructions that will ultimately be rolled back. Using a hierarchy of hardware ordering modules, themselves ordered by explicit virtual timestamps, a scalable ordering system is proposed. This hierarchy forms the basis of a twisted memory system, a multiple-version memory system capable of identifying speculative memory dependence violations. The preliminary investigations presented here show that twisted memory has the potential to support aggressive speculative parallel execution. Particular attention is paid to memory bandwidth requirements.
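
    A minimal sketch of the violation-detection idea, under a deliberately simplified model: each access carries a scalar virtual timestamp encoding causal order (the thesis's realizable timestamps are bounded and hierarchically allocated, which this sketch ignores).

```python
# Sketch: detecting a memory dependence violation with virtual
# timestamps. Unbounded integer timestamps stand in for the
# restricted-length virtual timestamps studied in the thesis.

class SpeculativeMemory:
    def __init__(self):
        self.loads = {}  # address -> timestamps of loads already executed

    def load(self, addr, ts):
        self.loads.setdefault(addr, []).append(ts)

    def store(self, addr, ts):
        # Any already-executed load that is causally later than this
        # store may have read a stale value and must be rolled back.
        return [t for t in self.loads.get(addr, []) if t > ts]

mem = SpeculativeMemory()
mem.load(0x40, ts=7)           # load executes early, out of order
print(mem.store(0x40, ts=3))   # -> [7]: dependence violation detected
```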

    Choreographing the extended agent : performance graphics for dance theater

    Thesis (Ph.D.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005. Includes bibliographical references (v. 2, leaves 448-458). The marriage of dance and interactive image has been a persistent dream over the past decades, but reality has fallen far short of potential for both technical and conceptual reasons. This thesis proposes a new approach to the problem and lays out the theoretical, technical and aesthetic framework for the innovative art form of digitally augmented human movement. I will use as example works a series of installations, digital projections and compositions, each of which contains a choreographic component, either through direct collaboration with a choreographer or through the creation of artworks that automatically organize and understand purely virtual movement. These works lead up to two unprecedented collaborations with two of the greatest choreographers working today: new pieces that combine dance and interactive projected light using real-time motion capture live on stage. The existing field of "dance technology" is one with many problems. This is a domain with many practitioners, few techniques and almost no theory; a field that is generating "experimental" productions with every passing week, has literally hundreds of citable pieces and no canonical works; a field that is oddly disconnected from modern dance's history, pulled between the practical realities of the body and those of computer art, and has no influence on the prevailing digital art paradigms that it consumes. This thesis will seek to address each of these problems: by providing techniques and a basis for "practical theory"; by building artworks with resources and people that have never previously been brought together, in theaters and in front of audiences previously inaccessible to the field; and by proving through demonstration that a profitable and important dialogue between digital art and the pioneers of modern dance can in fact occur. The methodological perspective of this thesis is that of biologically inspired, agent-based artificial intelligence, taken to a high degree of technical depth. The representations, algorithms and techniques behind such agent architectures are extended and pushed into new territory for both interactive art and artificial intelligence. In particular, this thesis will focus on the control structures and the rendering of the extended agents' bodies, the tools for creating complex agent-based artworks in intense collaborative situations, and the creation of agent structures that can span live image and interactive sound production. Each of these parts becomes an element of what it means to "choreograph" an extended agent for live performance. Marc Downie. Ph.D.

    Join Point Designation Diagrams: A Graphical Notation for the Design of Join Point Selections in Aspect-Oriented Software Development

    Sharing knowledge about program specifications is a crucial task in collaborative software development. Developers need to be able to properly assess the objectives of a program specification in order to adequately deploy or evolve a piece of program. The specification of join point selections (also known as "pointcuts") in Aspect-Oriented Software Development (AOSD) is a part of a program which frequently grows quite complex, in particular if the join point selections involve selection constraints on the dynamic execution history of the program. In that case, readers of the pointcut specification frequently face considerable comprehension problems because they need to inspect and piece together an intricate and fragmented program specification in order to reconstruct the true objectives of the join point selection. This thesis presents Join Point Designation Diagrams (JPDDs) as a possible solution to the problem. JPDDs are a visual notation that provides an extensive set of join point selection means, consolidated from a variety of contemporary aspect-oriented programming languages. JPDDs are capable of highlighting different join point selection constraints depending on the conceptual view of program execution which underlies the join point selection. With the help of these means, JPDDs can represent complex join point selections on the dynamic execution of a program in a succinct and concise manner. JPDDs may be used by software developers working in different aspect-oriented programming languages to represent their join point selections, improving the comprehensibility of the selections and thus easing communication between developers. This thesis gives empirical evidence that JPDDs indeed facilitate the comprehensibility of join point selections. To do so, it conducts a controlled experiment which compares JPDDs to equivalent pointcut implementations in an aspect-oriented programming language. The experiment shows that JPDDs have a clear benefit over their codified counterparts in most cases, while in only a few cases no such benefit could be measured.
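
    To ground the terminology: a pointcut with a constraint on the dynamic execution history can be read as a predicate over a recorded event trace. The Python sketch below is a hypothetical illustration (the event names, trace format and function are invented, not taken from the thesis) of the kind of history-dependent selection a single JPDD depicts graphically.

```python
# Sketch: a join point selection constrained by execution history,
# expressed as a predicate over a recorded trace of (event, target)
# pairs. Event names are hypothetical examples.

def select_join_points(trace):
    """Yield indices of 'withdraw' calls preceded by a 'login' call
    on the same target object."""
    logged_in = set()
    for i, (event, target) in enumerate(trace):
        if event == "login":
            logged_in.add(target)
        elif event == "withdraw" and target in logged_in:
            yield i

trace = [("login", "acct1"), ("withdraw", "acct2"), ("withdraw", "acct1")]
print(list(select_join_points(trace)))  # -> [2]
```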

    Enhancing numerical modelling efficiency for electromagnetic simulation of physical layer components.

    The purpose of this thesis is to present solutions that overcome several key difficulties limiting the application of numerical modelling in communication cable design and analysis. In particular, simulations are time consuming, and the process of comparison requires skill and is poorly defined and understood. When much of the design process consists of optimising performance within a well-defined domain, artificial intelligence techniques may reduce or remove the need for human interaction in the design process. The automation of human processes allows round-the-clock operation at a faster throughput. Achieving a speedup would permit greater exploration of the possible designs, improving understanding of the domain. This thesis presents work that relates to three facets of the efficiency of numerical modelling: minimizing simulation execution time, controlling optimization processes and quantifying comparisons of results. These topics are of interest because simulation times for most problems of interest run into tens of hours. The design process for most systems being modelled may be considered an optimisation process, in so far as the design is improved based upon a comparison of the test results with a specification. Software that automates this process permits improvements to continue outside working hours and produces decisions unaffected by the psychological state of a human operator. Improved performance of simulation tools would facilitate exploration of more variations on a design, which would improve understanding of the problem domain, promoting a virtuous circle of design. The minimization of execution time was achieved through the development of a Parallel TLM Solver which used neither specialized hardware nor a dedicated network. Its design was novel because it was intended to operate on a network of heterogeneous machines in a fault-tolerant manner, and it included a means to reduce the vulnerability of simulated data without encryption. Optimisation processes were controlled by genetic algorithms and particle swarm optimisation, which were novel applications in communication cable design. The work extended the range of cable parameters, reducing conductor diameters for twisted pair cables and reducing the optical coverage of screens for a given shielding effectiveness. Work on the comparison of results introduced "colour maps" as a way of displaying three scalar variables over a two-dimensional surface, and comparisons were quantified by extending 1D Feature Selective Validation (FSV) to two dimensions, using an ellipse-shaped filter, in such a way that it could be extended to higher dimensions. In so doing, some problems with FSV were detected, and suggestions for overcoming them are presented, such as handling the special case of zero-valued DC signals. A re-description of Feature Selective Validation using Jacobians and tensors is proposed, in order to facilitate its implementation in higher-dimensional spaces.
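
    As a hedged illustration of the optimisation side, the loop below sketches a genetic algorithm of the kind applied to cable parameters; the fitness function is a placeholder standing in for running a TLM simulation and scoring the result against a specification, and all parameters are invented for the example.

```python
# Sketch: a genetic algorithm loop for tuning cable design
# parameters. The fitness function is a stand-in for an
# electromagnetic (TLM) simulation compared against a spec.

import random

def fitness(params):
    # Placeholder objective: peak at params == 0.5 everywhere.
    return -sum((p - 0.5) ** 2 for p in params)

def evolve(pop_size=20, n_params=4, generations=50, mut_rate=0.1):
    pop = [[random.random() for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_params)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:          # point mutation
                child[random.randrange(n_params)] = random.random()
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())
```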

    Growing Brains in Silico: Integrating Biochemistry, Genetics and Neural Activity in Neurodevelopmental Simulations

    Biologists' understanding of the roles of genetics, biochemistry and activity in neural function is rapidly improving. All three interact in complex ways during development, recovery from injury, and learning and memory. The software system NeuroGene was written to simulate neurodevelopmental processes. Simulated neurons develop within a 3D environment. Protein diffusion, decay and receptor-ligand binding are simulated. Simulations are controlled by genetic information encoded in a novel programming language mimicking the control mechanisms of biological genes. Simulated genes may be regulated by protein concentrations, neural activity and cellular morphology. Genes control protein production, changes in cell morphology and neural properties, including learning. We successfully simulate the formation of the topographic projection from the retina to the tectum. We propose a novel model of topography based on simulated growth cones. We also simulate activity-dependent refinement, through which diffuse connections are modified until each retinal cell connects to only a few target cells.
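
    A minimal sketch of the diffusion-and-decay step such a simulation performs on a 3D protein concentration grid; the coefficients and the periodic boundary handling are simplifying assumptions for illustration, not NeuroGene's actual parameters.

```python
# Sketch: one explicit time step of protein diffusion and decay,
#   dc/dt = D * laplacian(c) - decay * c,
# on a 3D concentration grid. np.roll gives periodic boundaries,
# a simplification of a bounded tissue volume.

import numpy as np

def diffuse_decay(c, D=0.1, decay=0.01, dt=1.0):
    lap = -6.0 * c
    for axis in range(3):
        lap += np.roll(c, 1, axis=axis) + np.roll(c, -1, axis=axis)
    return c + dt * (D * lap - decay * c)

conc = np.zeros((20, 20, 20))
conc[10, 10, 10] = 1.0          # a point source of secreted protein
for _ in range(100):
    conc = diffuse_decay(conc)
```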

    Running parallel applications on a heterogeneous environment with accessible development practices and automatic scalability

    Grid computing makes it possible to gather large quantities of resources to work on a problem. In order to exploit this potential, a framework is necessary that presents the resources to the user programmer in a form that maintains productivity. The framework must not only provide accessible development, but must also make efficient use of the resources. The Seeds framework is proposed. It uses current Grid and distributed computing middleware to provide a parallel programming environment to a wider community of programmers. The framework was used to investigate the feasibility of scaling skeleton/pattern parallel programming onto the Grid. The research accomplished two goals: it made parallel programming on the Grid more accessible to domain-specific programmers, and it made parallel programs scale on a heterogeneous resource environment. Programming is made easier for the programmer by skeleton- and pattern-based approaches that effectively isolate the program from the environment. To extend the pattern approach, the pattern adder operator is proposed, implemented and tested. The results show the pattern operator can reduce the number of lines of code when compared with an MPJ-Express implementation of a stencil algorithm, while incurring an overhead of at most ten microseconds per iteration. The research in scalability involved adapting existing load-balancing techniques to skeletons and patterns, requiring little additional configuration on the part of the programmer. The hierarchical dependency concept is also proposed, which uses a streamed data flow programming model. The concept introduces data flow computation hibernation and dependencies that can split to accommodate additional processors. The results from implementing skeletons/patterns on hierarchical dependencies show that an 18.23% increase in code is necessary to enable automatic scalability. The concept can increase speedup depending on the algorithm and grain size.
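
    To illustrate the skeleton/pattern idea in miniature: the user supplies only a per-cell kernel, while the framework owns iteration (and, in Seeds, distribution across Grid resources). The sequential Python sketch below shows the separation of concerns only; the names are invented and this is not the Seeds API.

```python
# Sketch: a stencil skeleton. User code provides the kernel; the
# skeleton handles iteration and double-buffering, so a framework
# could swap in a distributed implementation behind the same call.

def stencil(grid, kernel, steps):
    """Apply kernel(centre, north, south, west, east) to every
    interior cell for `steps` iterations."""
    for _ in range(steps):
        nxt = [row[:] for row in grid]
        for i in range(1, len(grid) - 1):
            for j in range(1, len(grid[0]) - 1):
                nxt[i][j] = kernel(grid[i][j],
                                   grid[i - 1][j], grid[i + 1][j],
                                   grid[i][j - 1], grid[i][j + 1])
        grid = nxt
    return grid

grid = [[0.0] * 8 for _ in range(8)]
grid[0] = [1.0] * 8   # hot top edge as a fixed boundary condition
result = stencil(grid,
                 lambda c, n, s, w, e: (n + s + w + e) / 4.0,  # heat kernel
                 steps=10)
```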

    Modelling grid architecture.

    This thesis evaluates software engineering methods, especially event modelling of distributed systems architecture, by applying them to specific data-grid projects. Other methods evaluated include requirements analysis, formal architectural definition and discrete event simulation. A novel technique for matching architectural styles to requirements is introduced. Data-grids are a new class of networked information systems arising from e-science, itself an emergent method for computer-based collaborative research in the physical sciences. The tools used in general grid systems, which federate distributed resources, are reviewed, showing that they do not clearly guide architecture. The data-grid projects, which specifically join heterogeneous data stores, put required qualities at risk. Such risk of failure is mitigated by modelling in the designs of the EGSO and AstroGrid solar physics data-grid projects. Design errors are trapped by rapidly encoding and evaluating informal concepts, architecture, component interaction and objects. The success of software engineering modelling techniques depends on the models' accuracy, their ability to demonstrate the required properties, and their clarity (so project managers and developers can act on findings). The novel formal event modelling language chosen, FSP, meets these criteria at the diverse early lifecycle stages (unlike some techniques trialled). Models permit very early testing, finding hidden complexity, gaps in designed protocols and risks of unreliability. However, simulation is shown to be more suitable for evaluating qualities like scalability, which emerge when there are many component instances. Design patterns (which may be reused in other data-grids to resolve commonly encountered challenges) are exposed in these models. A method for generating useful models rapidly, introducing the strength of iterative lifecycles to sequential projects, also arises. Despite reported resistance to innovation in industry, the software engineering techniques demonstrated may benefit commercial information systems too.
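
    For readers unfamiliar with FSP: it describes processes as labelled transition systems that synchronise on shared actions, and its tools exhaustively explore the composed state space for defects such as deadlock. The sketch below mimics that kind of check in Python with two invented two-state processes; it is not FSP syntax or the thesis's models.

```python
# Sketch: compose two labelled transition systems that synchronise
# on shared actions and search the product state space for deadlock,
# the style of analysis FSP/LTSA tools automate.

# Each process: {state: {action: next_state}}.
CLIENT = {0: {"request": 1}, 1: {"reply": 0}}
SERVER = {0: {"request": 1}, 1: {"reply": 0}}

def compose_and_check(p, q):
    shared = ({a for s in p.values() for a in s} &
              {a for s in q.values() for a in s})
    seen, frontier = set(), [(0, 0)]
    while frontier:
        sp, sq = frontier.pop()
        if (sp, sq) in seen:
            continue
        seen.add((sp, sq))
        moves = []
        for a, nxt in p[sp].items():
            if a in shared:
                if a in q[sq]:               # both ready: synchronise
                    moves.append((nxt, q[sq][a]))
            else:                            # independent move of p
                moves.append((nxt, sq))
        for a, nxt in q[sq].items():
            if a not in shared:              # independent move of q
                moves.append((sp, nxt))
        if not moves:
            print("deadlock at", (sp, sq))
        frontier.extend(moves)
    print(f"explored {len(seen)} states")

compose_and_check(CLIENT, SERVER)
```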

    Database-driven hydraulic simulation of canal irrigation networks using object-oriented high-resolution methods

    Canal hydraulic models can be used to understand the hydraulic behaviour of large and complex irrigation networks at low cost. A number of computational hydraulic models were developed and tested in the early 1970s and late 1980s. Most were developed using finite difference schemes and procedural programming languages. In spite of the importance of these models, little progress was made on improving the numerical algorithms behind them; software development efforts focused more on the user interface than on the core algorithm. This research develops a database-driven, object-oriented hydraulic simulation model for canal irrigation networks using modern high-resolution shock-capturing techniques capable of handling a variety of flow situations, including trans-critical flow, shock propagation, flow through gated structures and channel networks. The technology platforms were selected with multi-user support and a possible migration to a web-based system in mind: a Java-based object-oriented model is integrated with a relational database management system that stores network configuration and simulation parameters. The developed software was tested using a benchmark test suite formulated jointly by the Department for Environment, Food and Rural Affairs (DEFRA) and the Environment Agency (EA). A total of eight tests (seven of them adapted from the DEFRA/EA benchmark suite) were run and results compiled. The developed software outperformed ISIS, HEC-RAS and MIKE 11 in three of the benchmark tests and performed equally well in the other four. The outcome of this research is therefore a new category of hydraulic simulation software that uses modern shock-capturing methods, fully integrated with a relational configuration database, and that has been fully evaluated and tested.
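
    As a simplified illustration of the numerical core, the sketch below advances the 1D Saint-Venant (shallow-water) equations by finite-volume steps with a Lax-Friedrichs flux. This first-order scheme is a stand-in chosen for brevity: the high-resolution schemes the thesis uses add approximate Riemann solvers and slope limiting, and the dam-break setup is an invented example.

```python
# Sketch: first-order finite-volume steps for the 1D shallow-water
# equations U = (h, hu) with a Lax-Friedrichs numerical flux.

import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def flux(h, hu):
    u = hu / np.maximum(h, 1e-9)
    return np.array([hu, hu * u + 0.5 * G * h ** 2])

def step(h, hu, dx, dt):
    U = np.array([h, hu])
    F = flux(h, hu)
    # Lax-Friedrichs flux at each interface i+1/2.
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * dx / dt * (U[:, 1:] - U[:, :-1])
    Un = U.copy()
    Un[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])  # conservative update
    return Un[0], Un[1]

# Dam-break initial condition: deep water left, shallow right.
x = np.linspace(0.0, 1.0, 200)
h = np.where(x < 0.5, 2.0, 1.0)
hu = np.zeros_like(h)
for _ in range(100):
    h, hu = step(h, hu, dx=x[1] - x[0], dt=0.001)
```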

    DSpace 1.8 manual
