
    SOPA - a self organizing processing and streaming architecture

    This paper describes SOPA, a component framework that is an essential part of the E-Chalk lecture recording system. It envisions a general processing and streaming architecture featuring the autonomous assembly of stream-processing components. The goal is to provide an easy-to-use framework in which dynamically organized processing graphs are built from components drawn from various distributed sources. Based on state-of-the-art solutions for component-based software development, the system simplifies the implementation and configuration of multimedia streaming applications and associated tools. It supports stream synchronization transparently, while additional components are installed on the fly according to requirements that may change at any time.
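
    The autonomous assembly described above can be pictured as chaining components whose stream formats match. The sketch below is a minimal illustration of that idea in Java, assuming an invented component interface; it is not SOPA's actual API.

```java
// Minimal sketch of autonomous pipeline assembly: greedily chain components
// so that each component's output format feeds the next one's input format.
// The interface and names here are invented for illustration.
import java.util.*;

interface StreamComponent {
    String inputFormat();   // format this component consumes, e.g. "audio/raw"
    String outputFormat();  // format it emits, e.g. "audio/mp3"
}

class GraphAssembler {
    static List<StreamComponent> assemble(String from, String to,
                                          Collection<StreamComponent> pool) {
        List<StreamComponent> chain = new ArrayList<>();
        String current = from;
        while (!current.equals(to)) {
            if (chain.size() > pool.size())   // guard against format cycles
                throw new IllegalStateException("cannot reach format " + to);
            final String cur = current;
            StreamComponent next = pool.stream()
                .filter(c -> c.inputFormat().equals(cur))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException(
                        "no component consumes " + cur));
            chain.add(next);
            current = next.outputFormat();
        }
        return chain;
    }
}
```

    A real assembler would also rank alternative components and re-plan when requirements change; this sketch shows only the format-matching core.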

    OntoTrader

    Modern Web-based Information Systems (WIS) are increasingly necessary to support users in different places with different types of information, by facilitating their access to information, decision making, workgroups, and so forth. Designing these systems requires standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Mediation elements such as traders thus enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary to communicate with one another. This paper presents the OntoTrader architecture, an ontological web trading agent based on the OMG ODP trading standard. It also presents the ontology that some system agents need in order to communicate with the trading agent, and the behavioral framework for the OntoTrader agent in SOLERES, an Environmental Management Information System (EMIS). This framework implements a “Query-Searching/Recovering-Response” information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation, and federation mediation models, and describes the formalization, an experimental testing environment with three scenarios, and a tool that allows the proposal to be evaluated and validated.
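
    Since the abstract names SPARQL and the JADE platform, the following hedged sketch shows what the "Query-Searching" step of such a retrieval model might look like using Apache Jena; the ontology prefix, property names, and data file are invented for illustration and are not the SOLERES vocabulary.

```java
// Hedged sketch: run a SPARQL query over service offers exported as RDF.
// Uses the real Apache Jena API; everything ontology-specific is invented.
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class TraderQuery {
    public static void main(String[] args) {
        // Load service offers from an RDF file (hypothetical file name).
        Model model = ModelFactory.createDefaultModel();
        model.read("soleres-offers.rdf");

        String sparql =
            "PREFIX ex: <http://example.org/trading#> " +
            "SELECT ?offer ?iface WHERE { " +
            "  ?offer ex:serviceType ?iface . " +
            "  ?offer ex:domain \"environmental\" . " +
            "}";

        try (QueryExecution qe =
                 QueryExecutionFactory.create(QueryFactory.create(sparql), model)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                QuerySolution row = rs.next();
                System.out.println(row.get("offer") + " -> " + row.get("iface"));
            }
        }
    }
}
```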

    Meta-scheduling Issues in Interoperable HPCs, Grids and Clouds

    Over the last few years, interoperability among resources has emerged as one of the most challenging research topics. Across computational paradigms, including HPC, grids, and clouds, the underlying complexity of the architectures (e.g., heterogeneity) and the targets to be achieved (e.g., flexibility) remain the same: to efficiently orchestrate resources in a distributed computing fashion by bridging the gap between local and remote participants. This is closely related to scheduling, one of the most important issues in designing a cooperative resource management system, especially in large-scale settings such as grids and clouds. Within this context, meta-scheduling offers additional functionality for interoperable resource management because of its agility in handling sudden variations and dynamic situations in user demands. Accordingly, in inter-infrastructures, including the InterCloud, a decentralised meta-scheduling scheme must overcome issues such as consolidated administrative management, bottlenecks, and local information exposure. In this work, we detail the fundamental issues in developing an effective interoperable meta-scheduler for e-infrastructures in general and the InterCloud in particular. Finally, we describe a simulation and experimental configuration based on real grid workload traces to demonstrate the interoperable setting, and we provide experimental results as part of a strategic plan for integrating future meta-schedulers.
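
    As a rough illustration of the decentralised scheme the abstract argues for, the sketch below shows a meta-scheduler that polls peer sites and forwards a job to the least-loaded one, with no central coordinator. The interfaces are assumptions for illustration, not the paper's implementation.

```java
// Illustrative decentralised meta-scheduling: each site runs its own
// meta-scheduler and dispatches to the least-loaded peer it knows about.
import java.util.*;

interface SiteScheduler {
    String name();
    double currentLoad();      // e.g. queued jobs divided by capacity
    void submit(String jobId);
}

class MetaScheduler {
    private final List<SiteScheduler> peers;

    MetaScheduler(List<SiteScheduler> peers) { this.peers = peers; }

    // Pick the peer reporting the lowest load at decision time; loads are
    // re-polled per job, so the choice adapts to sudden demand variations.
    void dispatch(String jobId) {
        SiteScheduler best = peers.stream()
            .min(Comparator.comparingDouble(SiteScheduler::currentLoad))
            .orElseThrow(() -> new IllegalStateException("no peer sites"));
        best.submit(jobId);
    }
}
```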

    Security in a Distributed Processing Environment

    Distribution plays a key role in telecommunication and computing systems today. It has become a necessity as a result of deregulation and anti-trust legislation, which have forced businesses to move from centralised, monolithic systems to distributed systems with a separation of applications and provisioning technologies, such as the service and transportation layers in the Internet. The need for reliability and recovery requires systems to use replication and secondary backup systems, such as those used in e-commerce. Distribution has consequences: systems are implemented in heterogeneous environments; systems must be scalable; and some control is lost, which contributes to the increased security issues that result from distribution. Each of these issues has to be dealt with. A distributed processing environment (DPE) is middleware that allows heterogeneous environments to operate in a homogeneous manner. Scalability can be addressed by using object-oriented technology to distribute functionality. Security is more difficult to address because it requires the creation of a distributed trusted environment. The problem with security in a DPE today is that it is treated as an adjunct service, i.e. an afterthought added to the system last. As a result, it is not pervasive and is therefore unable to fully support the other DPE services. DPE security needs to provide the five basic security services, authentication, access control, integrity, confidentiality, and non-repudiation, in a distributed environment, while ensuring simple and usable administration. The research detailed in this thesis starts by highlighting the inadequacies of the existing DPE and its services. A new management structure is introduced that provides greater flexibility and configurability while promoting mechanism and service independence. A new secure interoperability framework provides the ability to negotiate common mechanism and service-level configurations. New facilities are added to the non-repudiation and audit services. The research has shown that all services should be security-aware, and therefore able to interact with the Enhanced Security Service in order to provide a more secure environment within a DPE. As a proof of concept, the Trader service was selected: its security limitations were examined, new security behaviour policies were proposed, and it was then implemented as a Security-aware Trader that counteracts the existing security limitations.
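
    The following sketch illustrates the "security-aware Trader" idea under stated assumptions: a plain trader is wrapped so that every query is access-checked and audited before offers are returned. The interfaces are invented for illustration and do not reproduce the thesis's design.

```java
// Hypothetical security-aware trader: security is woven into the service
// rather than bolted on afterwards. All types here are illustrative.
import java.util.List;

interface Trader {
    List<String> query(String serviceType);   // returns matching offers
}

interface SecurityService {
    boolean authorize(String principal, String serviceType);
    void audit(String principal, String action);   // evidence trail
}

class SecureTrader implements Trader {
    private final Trader delegate;
    private final SecurityService security;
    private final String principal;

    SecureTrader(Trader delegate, SecurityService security, String principal) {
        this.delegate = delegate;
        this.security = security;
        this.principal = principal;
    }

    @Override
    public List<String> query(String serviceType) {
        // Access control and audit happen before any offer leaves the trader.
        if (!security.authorize(principal, serviceType)) {
            security.audit(principal, "DENIED query " + serviceType);
            throw new SecurityException("access denied for " + principal);
        }
        security.audit(principal, "query " + serviceType);
        return delegate.query(serviceType);
    }
}
```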

    A study in systems integration architecture

    This thesis studies the two architectures OSCA and ANSA, which support the ODPSE principle, in its first two parts. The third part describes a framework for integrating these two architectures. The idea of integration architectures in relation to open architectures is studied using the enabling technologies.

    Analysis and selection of the simulation environment

    This document provides the initial report of the Simulation work package (Work Package 4, WP4) of the CATNETS project. It contains an analysis of the requirements for a simulation tool to be used in CATNETS and an evaluation of a number of grid and general-purpose simulators with respect to the selected requirements. A reasoned choice of a suitable simulator is made based on the evaluation conducted. -- This work analyses the requirements for a simulation environment for analysing the catallaxy; the choice of simulation environment is determined on the basis of metrics.
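
    A metric-based selection of this kind is often implemented as a weighted score over the evaluated criteria. The sketch below shows that pattern; the simulator names, criteria, weights, and scores are invented placeholders, not the report's actual figures.

```java
// Weighted-criteria selection sketch: score each candidate simulator as the
// weighted sum of its per-criterion ratings, then compare the totals.
import java.util.*;

class SimulatorSelection {
    public static void main(String[] args) {
        // Per-criterion scores: {scalability, economic models, extensibility}.
        Map<String, double[]> scores = new LinkedHashMap<>();
        scores.put("CandidateSimA", new double[]{0.9, 0.3, 0.6});
        scores.put("CandidateSimB", new double[]{0.6, 0.7, 0.8});
        double[] weights = {0.5, 0.3, 0.2};   // must sum to 1

        scores.forEach((name, s) -> {
            double total = 0;
            for (int i = 0; i < s.length; i++) total += weights[i] * s[i];
            System.out.printf("%s: %.2f%n", name, total);
        });
    }
}
```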

    Modeling Big Data based Systems through Ontological Trading

    One of the great challenges the information society faces is dealing with the huge amount of information generated and handled daily on the Internet. Progress in Big Data proposals attempts to solve this problem, but information search and retrieval remain limited, basically due to the large volumes handled, the heterogeneity of the information, and its dispersion among a multitude of sources. In this article, a formal framework is defined to facilitate the design and development of an Environmental Management Information System that works with a heterogeneous and large amount of data. This framework can nevertheless be applied to other information systems that work with Big Data, since it does not depend on the type of data and can be used in other domains. The framework is based on an Ontological Web-Trading Model (OntoTrader), which follows Model-Driven Engineering and Ontology-Driven Engineering guidelines to separate the system architecture from its implementation. The proposal is accompanied by a case study, SOLERES-KRS, an Environmental Knowledge Representation System designed and developed using Software Agents and Multi-Agent Systems.

    Characterization of new flexible players: Deliverable D3.2

    Project TradeRES - New Markets Design & Models for 100% Renewable Power Systems: https://traderes.eu/about/
    ABSTRACT: The subject matter of this report is the analysis of the electricity markets' actor scene, through the identification of actor classes and the characterisation of actors from a behavioural and an operational perspective. The techno-economic characterisation of market participants aims to support the upcoming model enhancements by aligning the agent-based model improvements with modern market design challenges and the contemporary characteristics of players. This work has been conducted in the context of task T3.2, which focuses on the factorisation of the distinctive operational and behavioural characteristics of players in market structures. Traditional parties have been considered together with new and emerging roles, with special focus on new actors related to flexible technologies and demand-side response. The main objectives have been the characterisation of the individual behaviours, objectives, and requirements of different electricity market players, considering both traditional entities and the new distributed ones, and the detailed representation of the new actors.
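
    One common way to encode such behavioural and operational characteristics in an agent-based market model is as parameterised actor classes. The sketch below is an assumption-laden illustration, not TradeRES's model: a demand-response actor that offers its flexible load only above an activation price.

```java
// Illustrative actor class for an agent-based market model. The fields and
// the naive bidding rule are assumptions, not the deliverable's model.
class DemandResponseActor {
    final double flexibleLoadMW;      // operational: shiftable demand
    final double activationPriceEur;  // behavioural: price at which it reacts

    DemandResponseActor(double flexibleLoadMW, double activationPriceEur) {
        this.flexibleLoadMW = flexibleLoadMW;
        this.activationPriceEur = activationPriceEur;
    }

    // Offer demand reduction only when the market price crosses the threshold.
    double offeredReductionMW(double marketPriceEur) {
        return marketPriceEur >= activationPriceEur ? flexibleLoadMW : 0.0;
    }
}
```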

    Design of a Teleworking Service Using Parlay Framework Federation

    A teleworking service allows people to work together effectively from home or other approved locations away from the regular work site, on an established work schedule. This is made possible through the use of Information and Communications Technology (ICT). At present, there are isolated applications that can assist teleworkers, such as e-mail and video conferencing, which were developed for use over the Internet. But the Internet is a best-effort network with no guarantee of Quality of Service (QoS), low security, and no standard billing system. The design of this teleworking service involves the integration of many existing services, such as e-mail, messaging, video conferencing, shared whiteboard, and database access. Other requirements are service provider interworking for service and resource usage, security, and QoS specification. Hence, we explore the emerging open service concept to create an integrated teleworking service that can be made available for subscription by corporate bodies and individuals. Service federation is the interaction between teleworkers across service provider domains. It is achieved via the interworking of providers' services and is an essential aspect of teleworking. We have realised service federation in a secure and seamless manner in the OSA / Parlay environment via the use of the OSA / Parlay framework. We looked at the use of a framework federation for the actual implementation of service federation. This framework federation is an interworking of frameworks based on an agreed-upon federation contract between them. New framework interfaces were introduced to facilitate this proposed solution, as the OSA / Parlay specifications do not yet support this approach. Service composition is the creation of a new service instance by composing one or more other services. We implemented this via framework and trader federation. The trader federation was used to locate services or users in different ASP domains. A high-level design of the teleworking service was done, with federation explored for the actual implementation. The Common Object Request Broker Architecture (CORBA) trading service was used to prove the concept. The RM-ODP methodology is followed in this teleworking service design. The OSA / Parlay terminal capability, generic call control, multiparty call control, and location Service Capability Features (SCFs) were used for the implementation in the CORBA Distributed Processing Environment (DPE).
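
    The trader federation described above can be illustrated with a simplified lookup that falls back to linked traders in other provider domains when no local offer matches. This sketch mirrors the idea only; it does not use the actual CORBA CosTrading interfaces, and it assumes an acyclic federation graph.

```java
// Simplified trader federation: try the local offer repository first, then
// forward the query to each linked trader in another provider domain.
import java.util.*;

class FederatedTrader {
    private final Map<String, List<String>> localOffers = new HashMap<>();
    private final List<FederatedTrader> links = new ArrayList<>();

    void export(String serviceType, String offer) {
        localOffers.computeIfAbsent(serviceType, k -> new ArrayList<>()).add(offer);
    }

    void link(FederatedTrader other) { links.add(other); }

    // A real trader would carry hop counts and link-follow policies to stop
    // loops; here the federation graph is assumed acyclic for brevity.
    List<String> query(String serviceType) {
        List<String> offers = localOffers.getOrDefault(serviceType, List.of());
        if (!offers.isEmpty()) return offers;
        for (FederatedTrader t : links) {
            List<String> remote = t.query(serviceType);
            if (!remote.isEmpty()) return remote;
        }
        return List.of();
    }
}
```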

    Adaptive object management for distributed systems

    This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirements for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments that support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992), and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding: components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility for exploring bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects: the overhead of runtime binding and remote messaging severely reduces performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment. Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components. Visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state. Adaptive management is made possible by changing the management policy according to this state. Such policy changes affect the location of objects, their bindings, and the choice of messaging system.
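
    The adaptive management idea, bindings held in a configuration model and re-planned under a runtime policy, can be sketched as below. The types and the co-location policy are assumptions for illustration, not the thesis's architecture.

```java
// Sketch: bindings live in a configuration model, outside the components.
// A tunable policy co-locates chatty fine-grained pairs to avoid the cost
// of remote messaging; changing the policy re-plans object placement.
import java.util.*;

enum Placement { COLOCATED, REMOTE }

class Binding {
    final String from, to;
    long messagesPerSecond;   // observed interaction rate for this binding
    Binding(String from, String to) { this.from = from; this.to = to; }
}

class ConfigurationModel {
    private final List<Binding> bindings = new ArrayList<>();
    private long colocationThreshold = 100;   // policy parameter, tunable at runtime

    void add(Binding b) { bindings.add(b); }
    void setThreshold(long t) { colocationThreshold = t; }

    // Re-evaluate placement whenever the policy or the observed rates change.
    Map<Binding, Placement> plan() {
        Map<Binding, Placement> plan = new HashMap<>();
        for (Binding b : bindings) {
            plan.put(b, b.messagesPerSecond > colocationThreshold
                        ? Placement.COLOCATED : Placement.REMOTE);
        }
        return plan;
    }
}
```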