
    Experience with statically-generated proxies for facilitating Java runtime specialisation

    Issues pertaining to mechanisms which can be used to change the behaviour of Java classes at runtime are discussed. The proxy mechanism is compared to, and contrasted with, other standard approaches to this problem. Some of the problems the proxy mechanism is subject to are expanded upon. The question of whether statically-developed proxies are a viable alternative to bytecode rewriting was investigated by means of the JavaCloak system, which uses statically-generated proxies to alter the runtime behaviour of externally-developed code. The issues addressed include ensuring type safety, dealing with the self problem, preserving object encapsulation, and handling object identity and equality. Some performance figures are provided which demonstrate the load the JavaCloak proxy mechanism places on the system.
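
    As a minimal sketch of the statically-generated proxy idea discussed above (the Account interface and the logging concern are invented for illustration, not JavaCloak code), a proxy implements the same interface as the wrapped object and interposes extra behaviour around each delegated call; in JavaCloak such proxies are generated statically rather than written by hand, but the delegation structure is the same:

        // Hypothetical example of a statically-generated proxy.
        interface Account {
            void deposit(long amount);
            long balance();
        }

        public final class AccountProxy implements Account {
            private final Account target;  // externally-developed object being specialised

            public AccountProxy(Account target) {
                this.target = target;
            }

            @Override
            public void deposit(long amount) {
                System.out.println("before deposit(" + amount + ")");
                // The "self problem" mentioned in the abstract: inside target,
                // 'this' refers to the target object, not to the proxy.
                target.deposit(amount);
            }

            @Override
            public long balance() {
                return target.balance();
            }
        }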

    Middle-out domain-specific aspect languages and their application in agent-based modelling runtime inspection

    Domain-Specific Aspect Languages (DSALs) are a valuable tool for separating cross-cutting concerns, particularly within fields with endemic cross-cutting practices. Agent-Based Modelling (ABM) runtime inspection, which cuts across the core concern of model development, serves as a prime example. Despite their usefulness, DSALs face multiple adoption issues: the literature regarding their development and use is incohesive, coupling to a weave target hinders re-use, and available tooling is immature compared to Domain-Specific Languages (DSLs). We believe these issues can be aided by furthering DSL middle-out techniques for DSALs. We first define the background of what a DSAL is and how they may be used, moving on to how we can use DSL techniques to further DSALs. We develop a middle-out semantic model approach for developing domain-level DSALs with transparent aspect orientation using adaptations of DSL techniques. We have implemented the approach as model-specific DSALs for the in-house framework Animaux, and as a middleware-specific DSAL for agent messages in the JADE framework, which can be specialised to models using extension DSALs. We give illustrative result cases using our implementations to provide a baseline for the user development costs and performance of this approach. In conclusion, we believe the adoption of these technologies aids ABM applications, and we encourage future work in similar fields. This thesis has given a base philosophy toward DSLs, a novel approach for the development of middle-out DSALs, and illustrative cases of this approach.
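
    The DSALs themselves are not shown in the abstract; as a rough analogue only, the sketch below uses plain AspectJ (annotation style) to log outgoing JADE agent messages, the kind of runtime-inspection concern a middleware-specific DSAL would express at the domain level. The aspect, its pointcut, and the logging action are assumptions for illustration.

        import org.aspectj.lang.annotation.Aspect;
        import org.aspectj.lang.annotation.Before;
        import jade.core.Agent;
        import jade.lang.acl.ACLMessage;

        // Hypothetical inspection aspect: logs every ACL message an agent sends.
        // A domain-level DSAL would express this intent without exposing
        // pointcut syntax to the modeller.
        @Aspect
        public class MessageInspection {

            @Before("call(void jade.core.Agent.send(jade.lang.acl.ACLMessage)) && args(msg) && target(sender)")
            public void logOutgoing(ACLMessage msg, Agent sender) {
                System.out.printf("%s sends: %s%n", sender.getLocalName(), msg.getContent());
            }
        }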

    On the Security of Software Systems and Services

    This work investigates new methods for addressing the security issues and threats arising from the composition of software. This task has been carried out through the formal modelling of both the software composition scenarios and the security properties, i.e., policies, to be guaranteed. Our research moves across three different modalities of software composition which are of main interest for some of the most sensitive aspects of the modern information society. They are mobile applications, trust-based composition and service orchestration. Mobile applications are programs designed to be deployable on remote platforms. Basically, they are the main channel for the distribution and commercialisation of software for mobile devices, e.g., smart phones and tablets. Here we study the security threats that affect the application providers and the hosting platforms. In particular, we present a programming framework for the development of applications with static and dynamic security support. Also, we implemented an enforcement mechanism for applying fine-grained security controls on the execution of possibly malicious applications. In addition to security, trust represents a pragmatic and intuitive way of managing the interactions among systems. Currently, trust is one of the main factors that human beings take into account when deciding whether or not to accept a transaction. In our work we investigate the possibility of defining a fully integrated environment for security policies and trust, including a runtime monitor. Finally, Service-Oriented Computing (SOC) is the leading technology for business applications distributed over a network. The security issues related to service networks are many and multi-faceted. We mainly deal with the static verification of secure composition plans of web services. Moreover, we introduce the synthesis of dynamic security checks for protecting the services against illegal invocations.
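
    The enforcement mechanism itself is not detailed in the abstract; the following is a minimal sketch, under invented names and an invented policy representation, of the general reference-monitor pattern in which a runtime check sits between an application and a security-relevant action:

        import java.util.Set;

        // Minimal sketch of an execution monitor enforcing a fine-grained policy:
        // a call proceeds only if the policy permits the (subject, action) pair.
        // The Policy shape and the SecurityException reaction are assumptions.
        public final class ExecutionMonitor {

            public interface Policy {
                boolean permits(String subject, String action);
            }

            private final Policy policy;

            public ExecutionMonitor(Policy policy) {
                this.policy = policy;
            }

            public void check(String subject, String action) {
                if (!policy.permits(subject, action)) {
                    throw new SecurityException(subject + " may not " + action);
                }
            }

            public static void main(String[] args) {
                Policy p = (subject, action) ->
                        subject.equals("trustedApp")
                        && Set.of("read:contacts", "open:socket").contains(action);
                ExecutionMonitor monitor = new ExecutionMonitor(p);
                monitor.check("trustedApp", "open:socket");   // allowed
                monitor.check("unknownApp", "send:sms");      // throws SecurityException
            }
        }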

    Transparent and adaptive application partitioning using mobile objects

    The dynamic nature and heterogeneity of modern execution environments such as mobile, ubiquitous, and grid computing present major challenges for the development and efficient execution of the applications targeted for these environments. In particular, applications tailored to run in a specific environment will show different and most likely sub-optimal behaviour when executed on a different and/or dynamic environment. Consequently, there has been growing interest in the area of application adaptation, which aims to enable applications to cope with varying execution environments. Adaptive application partitioning, a specific form of non-functional adaptation involving distribution of mobile objects across multiple host machines, is of particular interest to this thesis due to the diversity of its uses. In this approach, certain runtime information (known as context) is used to allow an object-oriented application to adaptively (re)adjust the placement of its objects during its execution, for purposes such as improving application performance and reliability as well as balancing resource utilisation across machines. Promoting the adoption of such adaptation calls for a process that requires minimal human involvement in both the execution and the development of the relevant application. These challenges establish the main goals and contributions of this work, which include: 1) Proposing an effective application partitioning solution via the adoption of a decentralised adaptation strategy known as local adaptation. 2) Enabling adaptive application partitioning which does not require human intervention, through automatic collection of required information/context. 3) Proposing a solution for transparently injecting the required adaptation functionality into regular object-oriented applications, allowing significant reduction of the associated development cost/effort. The proposed solutions have been implemented in a Java-based adaptation framework called MobJeX. This implementation, which was used as a test bed for the empirical experiments undertaken in this study, can be used to facilitate future research relevant to this particular study.
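
    As a toy illustration of context-driven placement (not the MobJeX API; the host metrics and the selection rule are assumptions), an adaptation engine could pick a host for a mobile object from collected context such as CPU load and free memory:

        import java.util.Comparator;
        import java.util.List;

        // Illustrative sketch: choose a destination host for a mobile object by
        // comparing simple context metrics gathered from candidate hosts.
        public final class PlacementExample {

            record HostContext(String hostName, double cpuLoad, double freeMemoryMb) {}

            // Pick the least-loaded host that still has enough free memory.
            static HostContext choosePlacement(List<HostContext> candidates, double requiredMemoryMb) {
                return candidates.stream()
                        .filter(h -> h.freeMemoryMb() >= requiredMemoryMb)
                        .min(Comparator.comparingDouble(HostContext::cpuLoad))
                        .orElseThrow(() -> new IllegalStateException("no suitable host"));
            }

            public static void main(String[] args) {
                List<HostContext> hosts = List.of(
                        new HostContext("mobile-device", 0.80, 96),
                        new HostContext("nearby-server", 0.25, 4096));
                System.out.println("migrate object to: " + choosePlacement(hosts, 128).hostName());
            }
        }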

    The Continuum Architecture: Towards Enabling Chaotic Ubiquitous Computing

    Interactions in the style of the ubiquitous computing paradigm are possible today, but only in handcrafted environments within one administrative and technological realm. This thesis describes an architecture (called Continuum), a design that realises the architecture, and a proof-of-concept implementation that brings ubiquitous computing to chaotic environments. Essentially, Continuum enables an ecology at the edge of the network, among users, competing service providers from overlapping administrative domains, competing internet service providers, content providers, and software developers that want to add value to the user experience. Continuum makes the ubiquitous computing functionality orthogonal to other application logic. Existing web applications are augmented for ubiquitous computing with functionality that is dynamically compiled and injected by a middleware proxy into the web pages requested by a web browser at the user's mobile device. This enables adaptability to environment variability, manageability without user involvement, and expansibility without changes to the mobile. The middleware manipulates self-contained software units with precise functionality (called frames), which help the user interact with contextual services in conjunction with the data to which they are attached. The middleware and frame design explicitly incorporates the possibility of discrepancies between the assumptions of ubiquitous-computing software developers and field realities: multiple administrative domains, unavailable service, unavailable software, and missing contextual information. A framework for discovery and authorisation addresses the chaos inherent to the paradigm through the notion of role assertions acquired dynamically by the user. Each assertion represents service access credentials and contains bootstrapping points for service discovery on behalf of the holding user. A proof-of-concept prototype validates the design and implements several frames that demonstrate general functionality, including driving discovery queries over multiple service discovery protocols and establishing equivalences between service types across discovery protocols.
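
    A toy sketch of the augmentation step described above (not Continuum code; the method name and the injected script URL are invented): a middleware proxy can rewrite an HTML response so that a frame's script is loaded by the browser on the mobile device:

        // Hypothetical illustration of response augmentation: inject a "frame"
        // script reference into the page before </body>.
        public final class PageAugmenter {

            static String inject(String html, String frameScriptUrl) {
                String snippet = "<script src=\"" + frameScriptUrl + "\"></script>";
                int i = html.toLowerCase().indexOf("</body>");
                // If the page has no </body>, append the snippet at the end instead.
                return (i < 0) ? html + snippet
                               : html.substring(0, i) + snippet + html.substring(i);
            }

            public static void main(String[] args) {
                String page = "<html><body><p>Hello</p></body></html>";
                System.out.println(inject(page, "/frames/printer-discovery.js"));
            }
        }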

    Semantic Service Description Framework for Efficient Service Discovery and Composition

    Web services have been widely adopted as a new distributed system technology by industries in the areas of enterprise application integration, business process management, and virtual organisation. However, the lack of semantics in current Web services standards has been a major barrier to the further improvement of service discovery and composition. For the last decade, Semantic Web Services have been an important research topic aimed at enriching the semantics of Web services. The key objective of Semantic Web Services is to achieve automatic/semi-automatic Web service discovery, invocation, and composition. There are several existing semantic Web service description frameworks, such as OWL-S, WSDL-S, and WSMF. However, existing frameworks have several issues, such as insufficient service usage context information, the need for precisely specified requirements to locate services, a lack of information about inter-service relationships, and poor handling of insufficient or incomplete information, which make the process of service discovery and composition less efficient than it should be. To address these problems, a context-based semantic service description framework is proposed in this thesis. This framework focuses not only on the capabilities of Web services, but also on the usage context information of Web services, which we consider an important factor in efficient service discovery and composition. Based on this framework, an enhanced service discovery mechanism is proposed. It gives service users more flexibility to search for services in more natural ways rather than only by technical specifications of required services. The service discovery mechanism also demonstrates how the features provided by the framework can facilitate the service discovery and composition processes. Together with the framework, a transformation method is provided to transform existing service descriptions into descriptions based on the new framework. The framework is evaluated through a scenario-based analysis in comparison with OWL-S, and through a prototype-based performance evaluation in terms of query response time, precision and recall, and system scalability.
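
    As a rough illustration of context-aware matching (the record fields and the scoring rule below are assumptions, not the proposed framework), a discovery query can filter service descriptions by usage context before ranking them by capability overlap:

        import java.util.List;
        import java.util.Set;

        // Hypothetical sketch of a context-enriched service description and a
        // naive discovery match over it.
        public final class ServiceDiscoveryExample {

            record ServiceDescription(String name, Set<String> capabilities, Set<String> usageContexts) {}

            // Keep only services usable in the requested context, then rank by
            // how many of the wanted capabilities each one offers.
            static List<ServiceDescription> discover(List<ServiceDescription> registry,
                                                     Set<String> wantedCapabilities,
                                                     String context) {
                return registry.stream()
                        .filter(s -> s.usageContexts().contains(context))
                        .sorted((a, b) -> Long.compare(
                                overlap(b.capabilities(), wantedCapabilities),
                                overlap(a.capabilities(), wantedCapabilities)))
                        .toList();
            }

            private static long overlap(Set<String> a, Set<String> b) {
                return a.stream().filter(b::contains).count();
            }

            public static void main(String[] args) {
                var registry = List.of(
                        new ServiceDescription("MapRoute", Set.of("routing", "geocoding"), Set.of("travel")),
                        new ServiceDescription("PayGate", Set.of("payment"), Set.of("retail")));
                System.out.println(discover(registry, Set.of("routing"), "travel"));
            }
        }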

    A framework for adaptive monitoring and performance management of component-based enterprise applications

    Most large-scale enterprise applications are currently built using component-based middleware platforms such as J2EE or .NET. Developers leverage enterprise services provided by such platforms to speed up development and increase the robustness of their applications. In addition, using a component-oriented development model brings benefits such as increased reusability and flexibility in integrating with third-party systems. In order to provide the required services, the application servers implementing the corresponding middleware specifications employ a complex run-time infrastructure that integrates with developer-written business logic. The resulting complexity of the execution environment in such systems makes it difficult for architects and developers to completely understand the implications of alternative design options on the performance of the running system. They often make incorrect assumptions about the behaviour of the middleware, which may lead to design decisions that cause severe performance problems after the system has been deployed. This situation is aggravated by the fact that, although application servers vary greatly in performance and capabilities, many advertise a similar set of features, making it difficult to choose the one that is most appropriate for the task at hand. The thesis presents a methodology and tool for approaching performance management in enterprise component-based systems. By leveraging the component platform infrastructure, the described solution can non-intrusively instrument running applications and extract performance statistics. The use of component meta-data for target analysis, together with standards-based implementation strategies, ensures the complete portability of the instrumentation solution across different application servers. Based on this instrumentation infrastructure, a complete performance management framework including modelling and performance prediction is proposed. Most instrumentation solutions exhibit static behaviour by targeting a specified set of components. For long-running applications, a constant overhead profile is undesirable and, typically, such a solution would only be used for the duration of a performance audit, sacrificing the benefits of constantly observing a production system in favour of a reduced performance impact. This is addressed in this thesis by proposing an adaptive approach to monitoring which uses execution models to target profiling operations dynamically on components that exhibit performance degradation; this ensures a negligible overhead when the target application performs as expected and a minimal impact when certain components under-perform. Experimental results obtained with the prototype tool demonstrate the feasibility of the approach in terms of induced overhead. The portable and extensible architecture yields a versatile and adaptive basic instrumentation facility for a variety of potential applications that need a flexible solution for monitoring long-running enterprise applications.
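
    The thesis instruments components through the application server's own infrastructure; purely as an illustration of the adaptive idea, the sketch below uses a JDK dynamic proxy and switches detailed reporting on only when a call exceeds a response-time threshold (the interface, threshold, and escalation action are assumptions):

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;

        // Sketch of adaptive monitoring: cheap timing on every call, escalation
        // only for calls that show performance degradation.
        public final class AdaptiveMonitor {

            public interface OrderService { String placeOrder(String item); }

            @SuppressWarnings("unchecked")
            static <T> T monitored(T target, Class<T> iface, long thresholdNanos) {
                InvocationHandler handler = (proxy, method, args) -> {
                    long start = System.nanoTime();
                    Object result = method.invoke(target, args);
                    long elapsed = System.nanoTime() - start;
                    if (elapsed > thresholdNanos) {
                        // In the framework described above this would trigger deeper
                        // profiling of the under-performing component, not just a log line.
                        System.err.printf("%s took %d ns (threshold %d)%n",
                                method.getName(), elapsed, thresholdNanos);
                    }
                    return result;
                };
                return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                        new Class<?>[] {iface}, handler);
            }

            public static void main(String[] args) {
                OrderService real = item -> "order placed: " + item;
                OrderService watched = monitored(real, OrderService.class, 1_000);
                System.out.println(watched.placeOrder("book"));
            }
        }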

    Developing a global observer programming model for large-scale networks of autonomic systems

    Computing and software-intensive systems are now an inextricable part of the fabric of modern work, life and entertainment. This has consequently increased our reliance on their dependable operation. While much is known regarding software engineering practices for dependable software systems, the extreme scale, complexity and dynamics of modern software have pushed conventional software engineering tools and techniques to their acceptable limits. Consequently, over the last decade, this has accelerated research into non-conventional methods, many of which are inspired by social and/or biological system models. Exemplars are the DARPA-funded Self-Regenerative Systems (SRS) programme and Autonomic Computing, where a closed-loop feedback control model is essential to delivering the advocated cognitive immunity and self-management capabilities. While much research work has been conducted on various aspects of SRS and autonomy, it is typically based on the assumptions that the structural model (organisation) of managed elements is static and that exhaustive monitoring and feedback is computationally scalable. In addition, existing federated approaches to distributed computation and control, such as Multi-Agent Systems, fail to satisfactorily address how global control may be enacted upon the whole system and how an individual component may take on specified monitoring duties, although methods of interaction between federated individuals are well understood. Equally, organic-inspired computing looks to deal with event scale and complexity largely from a mining perspective, with observation concerns deferred to a suitably selective abstraction known as the "observation model". However, computing and mathematical science research, along with other fields, has developed problem-specific approaches to help manage complexity: abstraction-based approaches can simplify structural organisation, allowing the underlying meaning to be better understood, while statistical and graph-based approaches can both provide identifying features and selectively reduce the size of a modelled structure by selecting specific areas that conform to certain topological criteria. This research studies the engineering concerns relating to the observation of large-scale networks of autonomic systems. It examines methods that can be used to manage scale, and generalises and formalises them within a software engineering approach guiding the development of an automated, adaptive observation subsystem: the Global Observer Model. This approach uses a model-based representation of the observed system, represented by appropriately attached modelled elements: adapters between the underlying system and the observation subsystem. The concepts of Signature and Technique definitions describe large-scale or complex system characteristics and target selection techniques, respectively. Collections of these objects are then utilised throughout the framework, along with decision and deployment logic (collectively referred to as the Observer Behaviour Definition, an ECA-like observational control), to provide a runtime-adaptable observation overlay. The evaluation of this research is provided by demonstrations of the observation framework: first in experimental form for assessment of the Signature and Technique approach, and then by application to the Email Exploration Tool (EET), a forensic investigation utility.
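
    The terms Signature, Technique and Observer Behaviour Definition come from the abstract, but the Java shapes below are assumptions made only to illustrate the ECA-like structure: a signature characterises a system state of interest, and a technique adapts the observation overlay wherever that signature matches.

        import java.util.Map;
        import java.util.function.Predicate;

        // Rough sketch of the Signature / Technique / behaviour-definition vocabulary.
        public final class ObserverSketch {

            // A Signature characterises a state of interest (the event/condition part).
            interface Signature extends Predicate<Map<String, Double>> {}

            // A Technique selects where to attach or intensify observation (the action part).
            interface Technique { void apply(String targetNode); }

            record BehaviourDefinition(Signature when, Technique then) {
                void evaluate(String node, Map<String, Double> metrics) {
                    if (when.test(metrics)) {
                        then.apply(node);   // adapt the observation overlay at runtime
                    }
                }
            }

            public static void main(String[] args) {
                Signature overloaded = m -> m.getOrDefault("queueLength", 0.0) > 100;
                Technique sample = node -> System.out.println("increase sampling on " + node);
                new BehaviourDefinition(overloaded, sample)
                        .evaluate("node-17", Map.of("queueLength", 250.0));
            }
        }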

    Closing the gap between guidance and practice, an investigation of the relevance of design guidance to practitioners using object-oriented technologies

    This thesis investigates whether object-oriented guidance is relevant in practice, and how this affects the software that is produced. This is achieved by surveying practitioners and studying how constructs such as interfaces and inheritance are used in open-source systems. Surveyed practitioners framed 'good design' in terms of impact on development and maintenance. Recognition of quality requires practitioner judgement (individually and as a group), and principles are valued over rules. Time constraints heighten sensitivity to the rework cost of poor design decisions. Examination of open-source systems highlights the use of interfaces and inheritance. There is some evidence of 'textbook' use of these structures, and much use is simple. Outliers are widespread, indicating a pragmatic approach. Design is found to reflect the pressures of practice: high-level decisions justify 'designed' structures and architecture, while uncertainty leads to deferred design decisions, simpler structures, repetition, and unconsolidated design. Sub-populations of structures can be identified which may represent common trade-offs. Useful insights are gained into practitioner attitudes to design guidance. Patterns of use and structure are identified which may aid in the assessment and comprehension of object-oriented systems.

    A simple reflective object kernel

    International audience