
    A metadata service for service oriented architectures

    Service-oriented architectures provide a modern paradigm for web services, allowing seamless interoperation among network applications and supporting a flexible approach to building large, complex information systems. A number of industrial standards have emerged to exploit this paradigm with the development of the J2EE and .NET infrastructure platforms, the communication protocol SOAP, the description language WSDL, and the orchestration languages BPEL, XLANG and WSCI. At the same time, the Semantic Web enables the automated use of ontologies to describe web services in a machine-interpretable language. To enable process composition and large-scale resource integration over heterogeneous sources, a new research initiative is needed. Current initiatives have identified the role of Peer-to-Peer networks and Service Oriented Architectures in enabling large-scale resource communication and integration. However, this approach neglects to identify or utilise the role of Semantic Web technologies in promoting greater automation and reliability through service semantics; a new framework is therefore required that adopts Peer-to-Peer networks, Service Oriented Architectures and Semantic Web technologies. In this context, this thesis presents a management and storage framework for a distributed service repository over a super-peer network to facilitate process composition.
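    A rough illustration of the super-peer idea: each super peer indexes the service metadata published by its leaf peers and answers semantic lookups over it. The sketch below is a minimal Python model under our own assumptions; the class and field names are hypothetical and not taken from the thesis.

        # Minimal sketch of a super-peer service registry (hypothetical names,
        # not the thesis's actual design). Leaf peers publish descriptions;
        # queries match on ontology concepts attached as metadata.
        from dataclasses import dataclass, field

        @dataclass
        class ServiceDescription:
            name: str                   # service name, e.g. taken from its WSDL
            wsdl_url: str               # location of the WSDL document
            concepts: set = field(default_factory=set)  # ontology terms

        class SuperPeer:
            def __init__(self):
                self.registry = []      # descriptions from attached leaf peers

            def register(self, desc: ServiceDescription) -> None:
                self.registry.append(desc)

            def lookup(self, concept: str) -> list:
                # Return services annotated with the requested ontology concept.
                return [d for d in self.registry if concept in d.concepts]

        sp = SuperPeer()
        sp.register(ServiceDescription("BookFlight", "http://example.org/flight?wsdl",
                                       {"travel", "booking"}))
        print([d.name for d in sp.lookup("booking")])   # -> ['BookFlight']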

    MegSDF Mega-system development framework

    A framework for developing large, complex software systems, called Mega-Systems, is specified. The framework incorporates engineering, managerial, and technological aspects of development, concentrating on an engineering process. MegSDF proposes developing Mega-Systems as open distributed systems, pre-planned to be integrated with other systems and designed for change. At the management level, MegSDF divides the development of a Mega-System into multiple coordinated projects, distinguishing between a meta-management for the whole development effort, responsible for long-term, global objectives, and local management for the smaller projects, responsible for local, temporary objectives. At the engineering level, MegSDF defines a process model which specifies the tasks required for developing Mega-Systems, including their deliverables and interrelationships. The engineering process emphasizes the coordination required to develop the constituent systems. The process is active for the lifetime of the Mega-System and compatible with different approaches to performing its tasks. The engineering process consists of System, Mega-System, Mega-System Synthesis, and Meta-Management tasks. System tasks develop constituent systems. Mega-System tasks provide a means for engineering coordination, including Domain Analysis, Mega-System Architecture Design, and Infrastructure Acquisition tasks. Mega-System Synthesis tasks assemble Mega-Systems from the constituent systems. The Meta-Management task plans and controls the entire process. The Domain Analysis task provides a general, comprehensive, non-constructive domain model, which is used as a common basis for understanding the domain. MegSDF builds the domain model by integrating multiple significant perceptions of the domain. It recommends using a domain modeling schema to facilitate modeling and integrating the multiple perceptions. The Mega-System Architecture Design task specifies a conceptual architecture and an application architecture. The conceptual architecture specifies common design and implementation concepts and is defined using multiple views. The application architecture maps the domain model into an implementation and defines the overall structure of the Mega-System, its boundaries, components, and interfaces. The Infrastructure Acquisition task addresses the technological aspects of development. It is responsible for choosing, developing or purchasing, validating, and supporting an infrastructure. The infrastructure integrates the enabling technologies into a unified platform which is used as a common solution for handling technologies. The infrastructure facilitates portability of systems and the incorporation of new technologies. It is implemented as a set of services, divided into separate service groups which correspond to the views identified in the conceptual architecture.
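    The task structure reads more easily as a hierarchy. The Python fragment below encodes the MegSDF tasks named above; only the task names come from the abstract, the data layout is our own illustration.

        # Illustrative encoding of the MegSDF task structure; the task names
        # come from the abstract, the layout is ours.
        megsdf_process = {
            "Meta-Management": ["plan and control the entire process"],
            "Mega-System": [
                "Domain Analysis",                  # shared, non-constructive domain model
                "Mega-System Architecture Design",  # conceptual + application architecture
                "Infrastructure Acquisition",       # choose/build/validate the platform
            ],
            "System": ["develop each constituent system"],
            "Mega-System Synthesis": ["assemble the Mega-System from the systems"],
        }

        for task, parts in megsdf_process.items():
            print(task, "->", "; ".join(parts))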

    Service management for multi-domain Active Networks

    The Internet is an example of a multi-agent system. In our context, an agent is synonymous with network operators, Internet service providers (ISPs) and content providers. ISPs mutually interact for connectivity's sake, but the fact remains that two peering agents are inevitably self-interested. Egoistic behaviour manifests itself in two ways. Firstly, ISPs act in an environment where different ISPs have different spheres of influence, in the sense that they have control and management responsibilities over different parts of the environment. On the other hand, contention occurs when an ISP intends to sell resources to another, which gives rise to at least two of its customers sharing (hence contending for) a common transport medium. The multi-agent interaction was analysed by simulating a game-theoretic approach, and the alignment of dominant strategies adopted by agents with evolving traits was abstracted. In particular, the contention for network resources is arbitrated such that a self-policing environment may emerge from a congested bottleneck. Over the past five years, larger ISPs have simply pedalled as fast as they could to meet the growing demand for bandwidth, throwing bandwidth at congestion problems. Today, the dire financial positions of WorldCom and Global Crossing illustrate, to a certain degree, the fallacies of over-provisioning network resources. The framework proposed in this thesis enables subscribers of an ISP to monitor and police each other's traffic in order to establish a well-behaved norm in utilising limited resources. This framework can be extended to other inter-domain bottlenecks within the Internet. One of the main objectives of this thesis is also to investigate the impact on multi-domain service management in the future Internet, where active nodes could potentially be located amongst traditional passive routers. The advent of Active Networking technology necessitates node-level computational resource allocation, in addition to prevailing resource reservation approaches for communication bandwidth. Our motivation is to ensure that a service negotiation protocol takes account of these resources so that the response to a specific service deployment request from the end-user is consistent and predictable. To promote the acceleration of service deployment by means of Active Networking technology, a pricing model is also evaluated for computational resources (e.g., CPU time and memory); previous work in these areas concentrates only on bandwidth-related (i.e., communication) resources. Our pricing approach takes account of both guaranteed and best-effort service by adapting the arbitrage theorem from financial theory. The central tenet of our approach is to synthesise insights from different disciplines to address problems in data networks. The greater part of the research experience was obtained during direct and indirect participation in the IST-10561 project known as FAIN (Future Active IP Networks) and the ACTS-AC338 project called MIAMI (Mobile Intelligent Agent for Managing the Information Infrastructure). The Inter-domain Manager (IDM) component was integrated as an integral part of the FAIN policy-based network management system (PBNM). Its monitoring component (developed during the MIAMI project) learns about routing changes that occur within a domain so that the management system and the managed nodes have the same topological view of the network. This enables our reservation mechanism to reserve resources along the existing route set up by whichever underlying routing protocol is in place.
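    To make the self-policing dynamic concrete, the toy simulation below has self-interested subscribers sharing a bottleneck and adapting their sending rates each round. The parameters and the back-off/probe update rule are our own illustrative assumptions, not the thesis's actual model.

        # Toy repeated game: N subscribers share a bottleneck of capacity C.
        # Each round an agent probes for more rate if the link was uncongested
        # and backs off otherwise -- a crude stand-in for self-policing.
        N, C, ROUNDS = 5, 10.0, 50
        rates = [2.0 * (i + 1) / N for i in range(N)]   # heterogeneous starting rates

        for _ in range(ROUNDS):
            congested = sum(rates) > C
            for i in range(N):
                if congested:
                    rates[i] *= 0.7     # multiplicative back-off under congestion
                else:
                    rates[i] += 0.1     # gentle additive probe while capacity is idle

        print([round(r, 2) for r in rates], "total:", round(sum(rates), 2))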

    Requirements of the SALTY project

    This document is the first external deliverable of the SALTY project (Self-Adaptive very Large disTributed sYstems), funded by the ANR under contract ANR-09-SEGI-012. It is the result of task 1.1 of Work Package (WP) 1: Requirements and Architecture. Its objective is to identify and collect requirements from the use cases that are going to be developed in WP 4 (Use cases and Validation). Based on the study and classification of the use cases, requirements against the envisaged framework are then determined and organized into features. These features will guide and control the advances in all work packages of the project. As a start, the features are classified and briefly described, and related scenarios in the defined use cases are pinpointed. In subsequent tasks and deliverables, these features will facilitate design by assigning priorities to them and defining success criteria at a finer grain as the project progresses. This report, as the first external document, has no dependency on any other external document and serves as a reference for future external documents. As it has been built from the use case studies synthesized in two internal documents of the project, extracts from those documents are made available as appendices (cf. appendices B and C).

    Quality of service management in service-oriented grids

    Grid computing provides a robust paradigm for aggregating disparate resources in a secure and controlled environment. The emerging grid infrastructure gives rise to a class of scientific applications and services in support of collaborative and distributed resource-sharing requirements, as part of teleimmersion, visualization and simulation services. Because such applications operate in a collaborative mode, data must be stored, processed and delivered in a timely manner. Such classes of applications have collaborative and distributed resource-sharing requirements, and have stringent real-time constraints and quality-of-service (QoS) requirements. A QoS management approach is therefore essential to orchestrate and guarantee the interaction among such applications in a distributed computing environment. Grid architectures require an underpinning of QoS support to manage complex computation-intensive and data-intensive applications, as current grid middleware solutions lack QoS provision. QoS guarantees in the grid context have, however, not been given the importance they merit. To enhance its functionality, a computational grid must be overlaid with an advanced QoS architecture to best execute those applications with real-time constraints. This thesis reports on the design and implementation of a software framework, called Grid QoS Management (G-QoSm). G-QoSm incorporates a new QoS management model and provides a service-oriented QoS management approach that supports the Open Grid Service Architecture. Its novel features include grid-service discovery based on QoS attributes, immediate and advance resource reservation, service execution with QoS constraints, and techniques for QoS adaptation that compensate for resource degradation and optimise resource allocation while maintaining a service level agreement. The benefits of G-QoSm are demonstrated by prototype test-beds that integrate scientific grid applications and simulate grid data-transfer applications. Results show that the grid application and the data-transfer simulation perform better when used with the proposed QoS approach. QoS abstractions are presented for building QoS-aware applications in the context of service-oriented grids. These abstractions are application programming interfaces that help application developers utilise the proposed QoS management solution.
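    One of G-QoSm's distinguishing features is immediate and advance resource reservation. The sketch below shows how an advance-reservation table might admit or reject a request by checking overlapping bookings against capacity; the interface is a hypothetical illustration, not G-QoSm's actual API.

        # Hypothetical advance-reservation admission check in the spirit of
        # G-QoSm (the framework's real API is not shown in the abstract).
        from dataclasses import dataclass

        @dataclass
        class Reservation:
            start: float   # reservation start time
            end: float     # reservation end time
            cpu: float     # fraction of the resource's CPU reserved

        class ReservationTable:
            def __init__(self, capacity: float = 1.0):
                self.capacity = capacity
                self.booked = []

            def admit(self, req: Reservation) -> bool:
                # Conservative check: sum every booking that overlaps the
                # request window, even if those bookings do not all overlap
                # one another at the same instant.
                load = sum(r.cpu for r in self.booked
                           if r.start < req.end and req.start < r.end)
                if load + req.cpu <= self.capacity:
                    self.booked.append(req)
                    return True
                return False

        table = ReservationTable()
        print(table.admit(Reservation(0, 10, 0.6)))   # True
        print(table.admit(Reservation(5, 15, 0.5)))   # False: exceeds capacity in [5,10)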

    Big data quality framework: a holistic approach to continuous quality management

    Big Data is an essential research area for governments, institutions, and private agencies seeking to support their analytics decisions. Big Data concerns data in all its aspects: how it is collected, processed, and analyzed to generate value-added, data-driven insights and decisions. Degradation in data quality may have unpredictable consequences, as confidence in the data and its source is lost. In the Big Data context, data characteristics such as volume, multi-heterogeneous data sources, and fast data generation increase the risk of quality degradation and require efficient mechanisms to check data worthiness. However, ensuring Big Data Quality (BDQ) is a very costly and time-consuming process, since excessive computing resources are required. Maintaining quality through the Big Data lifecycle requires quality profiling and verification before the decision to process the data. A BDQ Management Framework for enhancing pre-processing activities while strengthening data control is proposed. The framework uses a new concept called the Big Data Quality Profile, which captures the quality outline, requirements, attributes, dimensions, scores, and rules. Using the Big Data profiling and sampling components of the framework, a fast and efficient data quality estimation is initiated before and after an intermediate pre-processing phase. The exploratory profiling component of the framework plays an initial role in quality profiling; it uses a set of predefined quality metrics to evaluate important data quality dimensions. It generates quality rules by applying various pre-processing activities and their related functions. These rules mainly target the Data Quality Profile and result in quality scores for the selected quality attributes. The framework implementation and dataflow management across the various quality management processes are discussed, and the paper concludes with ongoing work on framework evaluation and deployment to support quality evaluation decisions.
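    The profiling-before-processing idea can be illustrated with a tiny example: score a sample of records against predefined quality dimensions before deciding whether the full dataset is worth processing. The dimensions, thresholds and field names below are illustrative assumptions, not the paper's actual metrics.

        # Toy quality profiling over a sample (dimension definitions are
        # illustrative; the paper's Big Data Quality Profile is richer).
        import random

        def completeness(records, fields):
            cells = [r.get(f) for r in records for f in fields]
            return sum(v is not None for v in cells) / len(cells)

        def validity(records, field, predicate):
            vals = [r.get(field) for r in records if r.get(field) is not None]
            return sum(predicate(v) for v in vals) / len(vals) if vals else 0.0

        data = [{"age": random.choice([25, 130, None]), "name": "x"} for _ in range(1000)]
        sample = random.sample(data, 100)   # profile a sample, not the full set

        profile = {
            "completeness": completeness(sample, ["age", "name"]),
            "validity.age": validity(sample, "age", lambda a: 0 <= a <= 120),
        }
        print(profile)   # quality scores feed the pre-processing decision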

    Security Architecture for Swarms of Autonomous Vehicles in Smart Farming

    Nowadays, autonomous vehicles are incorporated into farms to ease manual labour. Being connected vehicles, as IoT systems, they are susceptible to cyber-security attacks that try to cause damage to hardware, software or even living beings. It is therefore important to provide sufficient security mechanisms to protect both the communications and the data, mitigating any possible risk or harm to farmers, livestock or crops. Technology providers are aware of the importance of ensuring security, and more and more secure solutions can be found on the market today. Generally, however, these individual solutions are not sufficient when they are part of complex hybrid systems, since no single global solution has been proposed. In addition, as the number of technologies and protocols used increases, the number of security threats also increases. This article presents a cyber-security architecture proposal for swarms of heterogeneous vehicles in smart farming, which covers all of the security aspects recommended by the ISO 7498-2 specification. As a result of this analysis, a detailed summary of the possible solutions and available technologies for each of the communication channels of the target system, as well as some recommendations, is presented.
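    As a flavour of the channel protection such an architecture prescribes, the sketch below authenticates a telemetry message from a swarm vehicle with an HMAC before the receiver acts on it. The key handling and message format are simplified assumptions, not the paper's proposal; a real deployment would use mutual TLS or similar with proper key provisioning.

        # Minimal message authentication between a swarm vehicle and its base
        # station (simplified assumption, not the paper's design).
        import hashlib, hmac, json

        SHARED_KEY = b"provisioned-per-vehicle-key"   # placeholder; never hardcode keys

        def sign(payload: dict) -> bytes:
            body = json.dumps(payload, sort_keys=True).encode()
            tag = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
            return tag + body

        def verify(message: bytes) -> dict:
            tag, body = message[:32], message[32:]
            expected = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
            if not hmac.compare_digest(tag, expected):
                raise ValueError("authentication failed: message rejected")
            return json.loads(body)

        msg = sign({"vehicle": "tractor-07", "lat": 43.36, "lon": -5.85})
        print(verify(msg))   # integrity and origin checked before acting on the data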

    Decentralized Identity and Access Management Framework for Internet of Things Devices

    The emerging Internet of Things (IoT) domain is about connecting people, devices and systems via sensors and actuators, to collect meaningful information from the devices' surrounding environment and take actions that enhance productivity and efficiency. The proliferation of IoT devices, from around a few billion today to over 25 billion in the next few years, spanning heterogeneous networks, defines a new paradigm shift for many industrial and smart-connectivity applications. Existing IoT networks face a number of operational challenges linked to device management and to devices' mutual authentication and authorization. While significant progress has been made in adopting existing connectivity and management frameworks, most of these frameworks are designed for unconstrained devices connected in centralized networks. IoT devices, on the other hand, are constrained devices with a tendency to operate in decentralized, peer-to-peer arrangements. Because of this tendency towards peer-to-peer service exchange, many existing frameworks fail to address the central challenge: giving ownership of devices and of the generated data to the actual users. Moreover, the diversity of devices and offered services demands more granular access control mechanisms that limit the devices' exposure to external threats and provide finer access control policies under the control of the device owner, without the need for a middleman. This work addresses these challenges by utilizing the concepts of decentralization introduced in Distributed Ledger Technologies (DLT) and the capability of automating business flows through smart contracts. The proposed work utilizes decentralized identifiers (DIDs) to establish a decentralized device identity management framework, and exploits blockchain tokenization, through both fungible and non-fungible tokens (NFTs), to build a self-controlled and self-contained access control policy based on the capability-based access control (CapBAC) model. The defined framework provides a layered approach that builds on identity management as the foundation for authentication and authorization processes, and establishes a mechanism for accounting through the adoption of a standardized DLT tokenization structure. The framework is demonstrated by implementing a number of use cases that address issues related to identity management in industries that suffer losses in billions of dollars due to counterfeiting and the lack of global and immutable identity records. Extending the framework to support applications that build verifiable data paths in the application layer is addressed through two simple examples. The system has been analyzed for the case of issuing authorization tokens, where DLT consensus mechanisms are expected to introduce major performance hurdles. A proof of concept emulating concurrent connections to a single device showed no timed-out requests at 200 concurrent connections, with the timed-out request ratio rising to 5% at 600 connections. The analysis also showed a considerable overhead of 10.4% in the data-link budget due to the self-contained policy token, a trade-off between building self-contained access tokens with no middleman and link cost.
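    The CapBAC idea behind the framework can be sketched compactly: the device owner issues a self-contained token naming the holder's permitted actions, and the device validates it locally with no middleman. The token fields and signing scheme below are illustrative assumptions; the thesis anchors identities in DIDs and issues ledger-backed tokens, for which a shared-secret HMAC stands in here.

        # Illustrative self-contained capability token (CapBAC style). The
        # thesis uses DIDs and DLT tokens; an HMAC stands in for the
        # ledger-backed signature in this sketch.
        import hashlib, hmac, json, time

        OWNER_KEY = b"device-owner-signing-key"   # placeholder secret

        def issue_capability(device_id: str, actions: list, ttl: int = 3600) -> str:
            token = {"device": device_id, "actions": actions,
                     "exp": int(time.time()) + ttl}
            body = json.dumps(token, sort_keys=True)
            sig = hmac.new(OWNER_KEY, body.encode(), hashlib.sha256).hexdigest()
            return body + "." + sig

        def authorize(token_str: str, device_id: str, action: str) -> bool:
            # The device checks the token locally: signature, subject,
            # permitted action, and expiry.
            body, sig = token_str.rsplit(".", 1)
            good = hmac.compare_digest(
                sig, hmac.new(OWNER_KEY, body.encode(), hashlib.sha256).hexdigest())
            token = json.loads(body)
            return (good and token["device"] == device_id
                    and action in token["actions"] and token["exp"] > time.time())

        cap = issue_capability("did:example:sensor-42", ["read:temperature"])
        print(authorize(cap, "did:example:sensor-42", "read:temperature"))  # True
        print(authorize(cap, "did:example:sensor-42", "actuate:valve"))     # False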
