
    An overview of Mirjam and WeaveC

    In this chapter, we elaborate on the design of an industrial-strength aspect-oriented programming language and weaver for large-scale software development. First, we present an analysis of the requirements for a general-purpose aspect-oriented language that can handle crosscutting concerns in ASML software. We also outline a strategy for working with aspects in large-scale software development processes. In our design, we both reuse existing aspect-oriented language abstractions and propose new ones to address the issues identified in our analysis. The code quality ensured by the realized language and weaver has a positive impact on both maintenance effort and lead time in the first-line software development process. As evidence, we present a short evaluation of the language and weaver as applied today in the software development process of ASML.
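    The abstract gives no Mirjam syntax, so the following is only a rough C++ sketch of the idea behind aspect weaving: a crosscutting tracing concern is written once as "around" advice and applied at two hypothetical join points (the function names are invented for illustration); a weaver such as WeaveC automates this kind of insertion at build time.

```cpp
// Rough illustrative sketch, not Mirjam/WeaveC syntax: the tracing concern is
// written once as "around" advice; a weaver would insert the equivalent calls
// automatically at every matching join point in the base code.
#include <cstdio>
#include <utility>

// Hypothetical advice: runs before and after the advised function call.
template <typename Fn, typename... Args>
auto with_tracing(const char* name, Fn fn, Args&&... args) {
    std::printf("enter %s\n", name);                 // before advice
    auto result = fn(std::forward<Args>(args)...);   // proceed with the join point
    std::printf("exit  %s\n", name);                 // after advice
    return result;
}

// Hypothetical base-concern functions (names invented for illustration).
int move_stage(int position) { return position + 1; }
int expose_wafer(int dose)   { return dose * 2; }

int main() {
    // Two join points advised by the same crosscutting concern.
    int p = with_tracing("move_stage", move_stage, 41);
    int d = with_tracing("expose_wafer", expose_wafer, 10);
    return (p + d) > 0 ? 0 : 1;
}
```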

    Architectural run-time models for performance and privacy analysis in dynamic cloud applications


    Adaptive object management for distributed systems

    This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirements for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992), and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding: components may be programmed to import references to other components and to explore their interfaces at run time, without using static type dependencies. Yet this overloads the component with the responsibility to explore bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects. The overhead of run-time binding and remote messaging severely reduces performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment. Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components. Visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state. Adaptive management is made possible by changing the management policy according to this state. Such policy changes affect the location of objects, their bindings, and the choice of messaging system.
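    As a loose illustration of the pluggability idea described above (not the thesis's actual tools or binding model), the minimal C++ sketch below registers component factories by name and resolves the binding only when the component is used, keeping binding decisions out of the component implementation.

```cpp
// Minimal sketch, assuming a single process: components register factories
// under a name, and a configuration step binds them at run time, so no static
// dependency between producer and consumer is compiled in.
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>

struct Component {                       // common plug interface
    virtual ~Component() = default;
    virtual void receive(const std::string& msg) = 0;
};

using Factory = std::function<std::unique_ptr<Component>()>;

std::map<std::string, Factory>& registry() {   // name -> factory
    static std::map<std::string, Factory> r;
    return r;
}

struct Logger : Component {              // an independently produced component
    void receive(const std::string& msg) override {
        std::cout << "log: " << msg << '\n';
    }
};

int main() {
    // Installation: make the component pluggable without static dependencies.
    registry()["logger"] = [] { return std::make_unique<Logger>(); };

    // Configuration: resolve the binding by name when the component is used.
    auto sink = registry().at("logger")();
    sink->receive("component bound at run time");
    return 0;
}
```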

    PhD

    The mammalian retina comprises 55-60 cell types mediating transduction of photic information through visual preprocessing channels. These cell types fall into six major cell superclasses, including photoreceptor, horizontal, amacrine, Müller, and ganglion cells. Through computational molecular phenotyping, using amino acids as discriminands, this dissertation shows that the major cellular superclasses of the murine retina are subdivisible into the following natural classes: 1 retinal pigment epithelium class, 2 photoreceptor, 2 bipolar cell, 1 horizontal cell, 15 amacrine cell, 1 Müller cell, and 7 ganglion cell classes. Retinal degenerative diseases such as retinitis pigmentosa result in loss of photoreceptors, which constitutes deafferentation of the neural retina. This deafferentation, when complete, is followed by retinal remodeling, the common fate of all retinal degenerations that trigger photoreceptor loss. The same strategy used to visualize cell classes in the wild-type murine retina was applied to examples of retinal degenerative disease in human tissues and in naturally occurring and genetically engineered models, examining all cell types in 17 human cases of retinitis pigmentosa (RP) and 85 cases of rodent retinal degeneration encompassing 13 different genetic models. Computational molecular phenotyping concurrently visualized glial transformations, neuronal translocations, and the emergence of novel synaptic complexes, achievements not possible with any other method. The fusion of phenotyping and anatomy at the ultrastructural level also enabled the modeling of synaptic connections, showing that the degenerating retina produces new synapses with vigor and raising the possibility that this phenomenon might be exploited to rescue vision. However, this circuitry is likely corruptive of visual processing and reflects, we believe, attempts by neurons to find synaptic excitation; even minor rewiring seriously corrupts signal processing in retinal pathways, leaving many current approaches to bionic and biological retinal rescue unsustainable. The ultimate conclusion is that the sequelae of retinal degenerative disease are far more complex than previously believed, and schemes to rescue vision via bionic implants or stem/engineered cells presume that normal wiring and cell population patterning are preserved after photoreceptor death. That presumption is incorrect: retinal neurons die, migrate, and create new circuitries. Vision rescue strategies will need to be refined.

    Doctor of Philosophy

    ZL is a C++-compatible language in which high-level constructs, such as classes, are defined using macros over a C-like core language. This approach is similar in spirit to Scheme and makes many parts of the language easily customizable. For example, since the class construct can be defined using macros, a programmer has complete control over the memory layout of objects. Using this capability, a programmer can mitigate certain problems in software evolution, such as fragile ABIs (Application Binary Interfaces) due to software changes and incompatible ABIs due to compiler changes. ZL's parser and macro expander are similar to Scheme's. Unlike Scheme, however, ZL must deal with C's richer syntax. Specifically, support for context-sensitive parsing and multiple syntactic categories (expressions, statements, types, etc.) leads to novel strategies for parsing and macro expansion. In this dissertation we describe ZL's approach to parsing and macros. We demonstrate how to use ZL to avoid problems with ABI instability through techniques such as fixing the size of class instances and controlling the layout of virtual method dispatch tables. We also demonstrate how to avoid problems with ABI incompatibility by implementing another compiler's ABI. Future work includes a more complete implementation of C++ and elevating the approach so that it is driven by a declarative ABI specification language.
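    ZL's own mitigation works through macros that control object layout; since the abstract contains no ZL code, the plain C++ sketch below instead shows the analogous and widely used pimpl technique for the same problem: pinning the instance size to one opaque pointer so that adding fields later does not change the layout clients compiled against (all names are illustrative).

```cpp
// Sketch of the fragile-ABI problem and one conventional C++ mitigation, not
// ZL's macro-based approach: clients only ever see sizeof(Widget) == one
// pointer, so the implementation can grow without recompiling them.
#include <iostream>
#include <memory>
#include <string>

// --- stable header surface ---------------------------------------------
class Widget {
public:
    Widget();
    ~Widget();
    void set_label(const std::string& s);
    void draw() const;
private:
    struct Impl;                    // defined out of line
    std::unique_ptr<Impl> impl_;    // instance size stays fixed across releases
};

// --- implementation, free to change between releases --------------------
struct Widget::Impl {
    std::string label;
    int revision = 2;               // a newly added field; old clients unaffected
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::set_label(const std::string& s) { impl_->label = s; }
void Widget::draw() const { std::cout << impl_->label << '\n'; }

int main() {
    Widget w;
    w.set_label("layout is stable");
    w.draw();
    return 0;
}
```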

    Performance modelling and the representation of large scale distributed system functions

    This thesis presents a resource-based approach to model generation for performance characterization and correctness checking of large-scale telecommunications networks. A notion called the timed automaton is proposed and then developed to encapsulate the behaviours of networking equipment, system control policies, and non-deterministic user behaviours. The states of pooled network resources and the behaviours of resource consumers are represented as continually varying geometric patterns; these patterns form part of the data operated upon by the timed automata. Such a representation allows great flexibility in the level of abstraction chosen when modelling telecommunications systems. Nonetheless, the notion of system functions is proposed to serve as a constraining framework for specifying bounded behaviours and features of telecommunications systems. Operational concepts, based on limit-preserving relations, are developed for the timed automata. Relations over system states represent the evolution of system properties observable at various locations within the network under study. The declarative nature of such permutative state relations provides a direct framework for generating highly expressive models suitable for carrying out optimization experiments. The usefulness of the developed procedure is demonstrated on a large-scale case study, in particular the problem of congestion avoidance in networks; it is shown that there can be global coupling among local behaviours within a telecommunications network. Uncovering such a phenomenon through a function-oriented simulation is a contribution to the area of network modelling. The direct and faithful derivation of performance metrics for loss in networks from resource utilization patterns is also a new contribution to this area.
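    The thesis's timed-automaton notion is richer than anything shown here, but a minimal sketch may help fix the idea: locations, a clock, and clock-guarded transitions whose effects update a pooled-resource count, standing in for the resource patterns described above (all names and numbers are invented).

```cpp
// Minimal timed-automaton sketch, assuming a single automaton and a single
// clock: transitions are guarded by clock bounds and adjust a pooled resource.
#include <iostream>
#include <vector>

struct Transition {
    int from, to;
    double min_clock, max_clock;    // guard: enabled while min <= clock <= max
    int resource_delta;             // effect on the pooled resource
};

struct TimedAutomaton {
    int location = 0;
    double clock = 0.0;
    int pool = 10;                  // pooled network resource, e.g. channels
    std::vector<Transition> transitions;

    void advance(double dt) { clock += dt; }        // time elapses

    bool fire() {                   // take the first enabled transition
        for (const auto& t : transitions) {
            if (t.from == location && clock >= t.min_clock && clock <= t.max_clock) {
                location = t.to;
                pool += t.resource_delta;
                clock = 0.0;        // reset clock on the discrete step
                return true;
            }
        }
        return false;
    }
};

int main() {
    TimedAutomaton call_handler;
    call_handler.transitions = {
        {0, 1, 0.0, 5.0, -1},       // accept a call: consume one channel
        {1, 0, 2.0, 8.0, +1},       // release the channel after service
    };
    call_handler.advance(1.0); call_handler.fire();
    call_handler.advance(3.0); call_handler.fire();
    std::cout << "channels free: " << call_handler.pool << '\n';
    return 0;
}
```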

    An Approach to Designing Clusters for Large Data Processing

    Cloud computing is increasingly being adopted for its cost savings and ability to scale. As data continues to grow rapidly, an increasing number of institutions are adopting non-SQL clusters to address the storage and processing demands of large data. However, evaluating and modelling non-SQL clusters presents many challenges. To address some of these challenges, this thesis proposes a methodology for designing and modelling large-scale processing configurations that respond to end-user requirements. First, goals are established for the big-data cluster; in this thesis, we use performance and cost as our goals. Second, the data is transformed from a relational schema to an appropriate HBase schema. In the third step, we iteratively deploy different clusters. We then model the clusters and evaluate different topologies (size of instances, number of instances, number of clusters, etc.). We use HBase as the large-data processing cluster and evaluate our methodology on traffic data from a large city and on a distributed community cloud infrastructure.
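    As a hedged sketch of the schema-transformation step only (the table, column-family, and field names below are hypothetical, and a std::map stands in for an HBase table rather than any real client API), a relational traffic reading can be mapped to a composite HBase row key of sensor id plus reversed timestamp, so that the most recent readings sort first per sensor.

```cpp
// Illustrative relational-to-HBase mapping, assuming a (sensor_id, ts, speed)
// source table: one relational row becomes one cell under a composite row key.
#include <cstdint>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

struct TrafficReading {             // one relational row
    std::string sensor_id;
    std::uint64_t ts;               // epoch seconds
    double speed_kmh;
};

std::string row_key(const TrafficReading& r) {
    std::ostringstream key;
    // Reverse the timestamp so newer readings sort lexicographically first.
    key << r.sensor_id << '#' << (UINT64_MAX - r.ts);
    return key.str();
}

int main() {
    // Stand-in for an HBase table: row key -> (column qualifier -> value).
    std::map<std::string, std::map<std::string, std::string>> table;

    TrafficReading r{"sensor-042", 1700000000, 57.5};
    table[row_key(r)]["m:speed"] = std::to_string(r.speed_kmh);  // family "m"

    for (const auto& row : table)
        for (const auto& cell : row.second)
            std::cout << row.first << ' ' << cell.first << " = " << cell.second << '\n';
    return 0;
}
```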