235,497 research outputs found

    Using Dedicated and Opportunistic Networks in Synergy for a Cost-effective Distributed Stream Processing Platform

    Full text link
    This paper presents a case for exploiting the synergy of dedicated and opportunistic network resources in a distributed hosting platform for data stream processing applications. Our previous studies have demonstrated the benefits of combining dedicated reliable resources with opportunistic resources for high-throughput computing applications, where timely allocation of the processing units is the primary concern. Since distributed stream processing applications demand large volumes of data transmission between the processing sites at a consistent rate, adequate control over the network resources is important here to assure a steady flow of processing. In this paper, we propose a system model for the hybrid hosting platform where stream processing servers installed at distributed sites are interconnected with a combination of dedicated links and the public Internet. Decentralized algorithms have been developed for allocating the two classes of network resources among the competing tasks, with the objectives of higher task throughput and better utilization of expensive dedicated resources. Results from an extensive simulation study show that with proper management, systems exploiting the synergy of dedicated and opportunistic resources yield considerably higher task throughput and thus a higher return on investment than systems solely using expensive dedicated resources. Comment: 9 pages
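    The abstract does not reproduce the paper's allocation algorithms, so the sketch below is only a hypothetical illustration of the underlying idea: high-rate streams are greedily pinned to the dedicated links while capacity lasts, and the rest spill over to the best-effort public Internet. The names and the heuristic itself are assumptions, not the authors' algorithm.

```python
# Hypothetical sketch: prefer dedicated links for high-rate streams,
# fall back to opportunistic (public Internet) capacity for the rest.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    rate_mbps: float  # required streaming rate

def allocate(tasks, dedicated_capacity_mbps):
    """Split tasks between dedicated and opportunistic resources
    using a simple highest-rate-first heuristic."""
    dedicated, opportunistic = [], []
    remaining = dedicated_capacity_mbps
    for task in sorted(tasks, key=lambda t: t.rate_mbps, reverse=True):
        if task.rate_mbps <= remaining:
            dedicated.append(task)
            remaining -= task.rate_mbps
        else:
            opportunistic.append(task)  # best-effort public Internet
    return dedicated, opportunistic

if __name__ == "__main__":
    tasks = [Task("video", 400), Task("logs", 50), Task("sensor", 120)]
    ded, opp = allocate(tasks, dedicated_capacity_mbps=450)
    print("dedicated:", [t.name for t in ded])
    print("opportunistic:", [t.name for t in opp])
```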

    Computing in the RAIN: a reliable array of independent nodes

    Get PDF
    The RAIN project is a research collaboration between Caltech and NASA-JPL on distributed computing and data-storage systems for future spaceborne missions. The goal of the project is to identify and develop key building blocks for reliable distributed systems built with inexpensive off-the-shelf components. The RAIN platform consists of a heterogeneous cluster of computing and/or storage nodes connected via multiple interfaces to networks configured in fault-tolerant topologies. The RAIN software components run in conjunction with operating system services and standard network protocols. Through software-implemented fault tolerance, the system tolerates multiple node, link, and switch failures, with no single point of failure. The RAIN technology has been transferred to Rainfinity, a start-up company focusing on creating clustered solutions for improving the performance and availability of Internet data centers. In this paper, we describe the following contributions: 1) fault-tolerant interconnect topologies and communication protocols providing consistent error reporting of link failures, 2) fault management techniques based on group membership, and 3) data storage schemes based on computationally efficient error-control codes. We present several proof-of-concept applications: a highly available video server, a highly available Web server, and a distributed checkpointing system. We also describe a commercial product, Rainwall, built with the RAIN technology.
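    The abstract does not specify RAIN's error-control codes, so the following is only a minimal sketch of the parity-style idea such storage schemes build on: one XOR parity block recovers any single missing data block; real systems use stronger codes.

```python
# Minimal parity sketch: XOR all data blocks to form a parity block,
# then recover one lost block by XOR-ing the survivors with parity.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"node-A..", b"node-B..", b"node-C.."]  # equal-size blocks
parity = xor_blocks(data)

# Simulate losing block 1 and rebuild it from the others plus parity.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
print("recovered:", recovered)
```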

    Distributed Client/Server Architecture With Dynamic Middle Tier

    Get PDF
    Widespread use of computer networks and the demanding needs of current network applications and technology impose a challenge to use bandwidth efficiently so as to solve the network congestion and server overloading problems. Past and ongoing solutions such as server replication and caching have been proposed to overcome these deficiencies. However, these solutions have not been implemented in an economical and configuration-transparent manner. Moreover, the problems of caching and disseminating real-time multimedia data in a bandwidth-conservative manner have not been addressed. In this thesis, the CHEK Proxy Framework (CPF) has been developed using a proxy solution to address these problems. Through caching, the proxy has become a traditional means of reducing user-perceived latency and network resource requirements. CPF transparently and dynamically creates a middle-tier application platform proxy in the client sub-network to execute the sharable section of any server application code; this is known as the application proxy. Besides caching static web content, this local application proxy helps to deliver real-time multimedia data on behalf of the remote server with lower bandwidth and better performance. CPF minimizes WAN connections while maximizing LAN interactions by multiplexing and de-multiplexing client requests through to the server via the proxy. As a result, the central server is made more reliable and scalable. The monitoring and management of the CHEK distributed objects is also made easier through the CHEK Management Console (CMC), which displays the inter-relationships between the distributed objects and their status information on a GUI-based control panel for ease of management. With its dynamic and transparent features, software versioning and maintenance problems are readily overcome. CPF has been shown to be useful in most client/server applications, particularly those of a broadcasting and collaborative nature such as video broadcasting and chat systems. CPF solves the network congestion and server overloading problems through a middle-tier proxy application platform located in the client sub-network with no manual configuration.
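    The CPF API itself is not given in the abstract; the sketch below only illustrates the multiplexing idea in miniature: identical client requests trigger a single upstream (WAN) fetch whose result is cached and shared on the LAN. The class and function names are invented for illustration.

```python
# Illustrative (not the CPF API): a middle-tier proxy that collapses
# identical client requests into one upstream fetch and caches it.
import threading

class MultiplexingProxy:
    def __init__(self, fetch_upstream):
        self.fetch_upstream = fetch_upstream  # one WAN round trip
        self.cache = {}
        self.locks = {}
        self.guard = threading.Lock()

    def get(self, key):
        with self.guard:
            if key in self.cache:
                return self.cache[key]        # served from the LAN
            lock = self.locks.setdefault(key, threading.Lock())
        with lock:                            # first caller fetches;
            if key not in self.cache:         # the rest wait and reuse
                self.cache[key] = self.fetch_upstream(key)
        return self.cache[key]

proxy = MultiplexingProxy(lambda k: f"payload for {k}")
print(proxy.get("/news"), proxy.get("/news"))  # second hit is local
```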

    Verification of University Student and Graduate Data using Blockchain Technology

    Get PDF
    Blockchain is a reliable and innovative technology that harnesses education and training through digital technologies. Nonetheless, keeping track of student/graduate academic achievement and managing blockchain access rights remain open issues. Detailed information about academic performance within a given period (semester) is not present in official education documents. Furthermore, academic achievement documents issued by institutions are not secured against unauthorized changes, due to the involvement of intermediaries. Therefore, verification of official educational documents has become a pressing issue with the recent development of digital technologies, yet effective tools to accelerate verification are rare and the process takes time. This study provides a prototype of the UniverCert platform based on a consortium version of the decentralized, open-source Ethereum blockchain technology. The proposed platform is based on a globally distributed peer-to-peer network that allows educational institutions to join the blockchain network, track student data, verify academic performance, and share documents with other stakeholders. The UniverCert platform was developed on a consortium blockchain architecture to address the problems universities face in storing and securing student data. The system provides a solution to facilitate student registration and the verification and authentication of educational documents.
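    As a hedged sketch of the verification idea (the registry dict below stands in for a consortium-chain smart contract, and all names are invented): the issuing institution anchors a SHA-256 digest of the document, and a verifier recomputes the digest and compares.

```python
# Toy model of on-chain document anchoring: a dict plays the role of
# the smart-contract registry; only the digest, not the document,
# would be stored on chain.
import hashlib

on_chain_registry = {}  # stand-in for a consortium-chain contract

def issue(document: bytes, record_id: str) -> None:
    on_chain_registry[record_id] = hashlib.sha256(document).hexdigest()

def verify(document: bytes, record_id: str) -> bool:
    return on_chain_registry.get(record_id) == hashlib.sha256(document).hexdigest()

diploma = b"B.Sc. Computer Science, 2023, J. Doe"
issue(diploma, "student-42")
print(verify(diploma, "student-42"))         # True: authentic copy
print(verify(diploma + b"!", "student-42"))  # False: tampered copy
```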

    Di-ANFIS: an integrated blockchain–IoT–big data-enabled framework for evaluating service supply chain performance

    Get PDF
    Service supply chain management is a complex process because of its intangibility, the high diversity of services, trustless settings, and uncertain conditions. However, traditional evaluation models mostly consider historical performance data and fail to predict and diagnose the root of problems. This paper proposes a distributed, trustworthy, tamper-proof, and learning framework for evaluating service supply chain performance based on Blockchain and Adaptive Network-based Fuzzy Inference System (ANFIS) techniques, named Di-ANFIS. The main objectives of this research are: 1) presenting hierarchical criteria of service supply chain performance to support diagnosing the root of problems; 2) proposing a smart learning model that deals with uncertain conditions by combining a neural network with fuzzy logic; and 3) introducing a distributed Blockchain-based framework, motivated by ANFIS's dependence on big data and the lack of trust and security in the supply chain. Furthermore, the proposed six-layer conceptual framework consists of the data layer, connection layer, Blockchain layer, smart layer, ANFIS layer, and application layer. This architecture creates a performance management system using the Internet of Things (IoT), smart contracts, and ANFIS on the Blockchain platform. The Di-ANFIS model provides a performance evaluation system that needs neither a third party nor a trusted intermediary, offering an agile and diagnostic model in a smart, learning process. It also saves computing time and speeds up information flow.
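    ANFIS networks learn the parameters of a Sugeno-type fuzzy inference system; the toy sketch below shows one such inference step with hand-picked (not learned) memberships and linear consequents, purely to illustrate how fuzzy rules and a neural-style weighted combination fit together. All rule names and numbers are invented.

```python
# One Sugeno-style fuzzy inference step, the computation an ANFIS
# network parameterizes: memberships fire two rules whose linear
# outputs are blended by a firing-strength-weighted average.
def tri(x, a, b, c):
    """Triangular membership peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(on_time_rate):
    w_low  = tri(on_time_rate, -0.5, 0.0, 0.6)  # "rate is low"
    w_high = tri(on_time_rate,  0.4, 1.0, 1.5)  # "rate is high"
    f_low  = 0.2 * on_time_rate + 0.1           # linear consequents,
    f_high = 0.7 * on_time_rate + 0.3           # as in first-order Sugeno
    return (w_low * f_low + w_high * f_high) / (w_low + w_high)

for rate in (0.2, 0.5, 0.9):
    print(f"on-time rate {rate}: performance score {infer(rate):.3f}")
```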

    AIDA-DB: a data management architecture for the edge and cloud continuum

    Get PDF
    There is an increasing demand for stateful edge computing for both complex Virtual Network Functions (VNFs) and application services in emerging 5G networks. Managing a mutable persistent state at the edge does, however, bring new architectural, performance, and dependability challenges. Not only does it have to be integrated with existing cloud-based systems, it must also cope with both operational and analytical workloads and be compatible with a variety of SQL and NoSQL database management systems. We address these challenges with AIDA-DB, a polyglot data management architecture for the edge and cloud continuum. It leverages recent developments in distributed transaction processing for a reliable mutable state in operational workloads, with a flexible synchronization mechanism for efficient data collection in cloud-based analytical workloads. Partially funded by project AIDA – Adaptive, Intelligent and Distributed Assurance Platform (POCI-01-0247-FEDER-045907), co-financed by the European Regional Development Fund (ERDF) through the Operational Program for Competitiveness and Internationalisation (COMPETE 2020) and by the Portuguese Foundation for Science and Technology (FCT) under CMU Portugal.
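    The abstract does not detail AIDA-DB's synchronization mechanism, so the following is only a speculative sketch of the general pattern: edge nodes append committed operational writes to a local change log that a cloud-side collector drains in batches for analytics. All names are invented.

```python
# Speculative change-capture pattern: operational writes at the edge
# feed a local log; the cloud analytical store pulls it in batches.
import queue

change_log = queue.Queue()

def commit_write(table, row):
    # ...apply the write to the edge operational store, then log it:
    change_log.put((table, row))

def drain_for_analytics(max_batch=100):
    """Collect up to max_batch logged writes for shipment to the cloud."""
    batch = []
    while not change_log.empty() and len(batch) < max_batch:
        batch.append(change_log.get())
    return batch

commit_write("meters", {"id": 7, "kwh": 3.2})
commit_write("meters", {"id": 8, "kwh": 1.1})
print(drain_for_analytics())
```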

    Guaranteed bandwidth implementation of message passing interface on workstation clusters

    Get PDF
    Due to their wide availability, networks of workstations (NOW) are an attractive platform for parallel processing. Parallel programming environments such as Parallel Virtual Machine (PVM) and Message Passing Interface (MPI) offer the user a convenient way to express parallel computing and communication for a network of workstations. Currently, a number of MPI implementations are available that offer low (average) latency and high bandwidth to users by utilizing an efficient MPI library specification and high-speed networks. In addition to high bandwidth and low average latency, mission-critical distributed applications and audio/video communications require a completely different type of service: guaranteed bandwidth and worst-case delays (worst-case latency) guaranteed by the underlying protocol. The hypothesis presented in this paper is that it is possible to provide an application with a low-level reliable transport protocol whose performance and guaranteed bandwidth are as close as possible to those of the hardware on which it is executing. The hypothesis is proven by designing and implementing a reliable high-performance message passing protocol interface which also provides guaranteed bandwidth to MPI and to mission-critical distributed MPI applications. This protocol interface works with the Fiber Distributed Data Interface (FDDI) driver which has been designed and implemented for Performance Technology Inc.'s commercial high-performance FDDI product, the Station Management Software 7.3, and the ADI/MPICH (Argonne National Laboratory and Mississippi State University's free MPI implementation).
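    The guaranteed-bandwidth protocol sits below the MPI layer and is invisible in application code; for orientation, here is a minimal point-to-point exchange using mpi4py, a stand-in for the C MPI interface the paper targets (the FDDI-level bandwidth reservation is not exposed at this layer).

```python
# Minimal MPI point-to-point exchange via mpi4py.
# Run with, e.g.: mpiexec -n 2 python demo.py   (file name assumed)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Sender: a message an underlying transport would carry with
    # reserved bandwidth and bounded delay.
    comm.send({"frame": 1, "deadline_ms": 40}, dest=1, tag=0)
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    print("received", msg)
```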

    Enabling digital grid for industrial revolution: self-healing cyber resilient platform

    Get PDF
    The key market objectives driving digital grid development are to provide sustainable, reliable, and secure network systems that can support a variety of applications against any potential cyber attacks. There is therefore an urgent demand to accelerate the development of an intelligent Software-Defined Networking (SDN) platform that can address the tremendous challenges of data protection for digital resiliency. Modern grid technology tends to adopt distributed SDN controllers to further slice the power grid domain and protect the boundaries of electric data at the network edges. To address these issues, this article proposes an intelligent, secure SDN controller for supporting digital grid resiliency, with management coordination capability, to enable self-healing features and the recovery of network traffic forwarding during service interruptions. A set of advanced features is employed in grid controllers to configure the network elements in response to possible disasters or link failures. In addition, various SDN topology scenarios are introduced for efficient coordination and configuration of network domains. Finally, to demonstrate the potential advantages of the intelligent, secure SDN system, a case study is presented that evaluates the requirements of secure digital modern grid networks and paves the way towards the next phase of the industrial revolution.
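    The article describes the self-healing behaviour only at a high level; as one illustrative (assumed, not the authors') mechanism, a controller can recompute a forwarding path over the surviving topology when a link fails, as the sketch below does with a plain breadth-first search.

```python
# Self-healing sketch: recompute a forwarding path after a link
# failure using BFS over the surviving topology (real controllers
# apply richer policies, QoS constraints, and security checks).
from collections import deque

def shortest_path(links, src, dst):
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:                 # walk back to reconstruct path
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                frontier.append(nxt)
    return None                         # partitioned: no route survives

links = {("s1", "s2"), ("s2", "s3"), ("s1", "s4"), ("s4", "s3")}
print(shortest_path(links, "s1", "s3"))                   # primary path
print(shortest_path(links - {("s2", "s3")}, "s1", "s3"))  # after failure
```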

    A decentralized multi-agent based network management system for ICT4D networks

    Get PDF
    Network management is fundamental for assuring the high quality of service required by each user and for the effective utilization of network resources. In this research, we propose the use of a decentralized, flexible, and scalable Multi-Agent based system to monitor and manage rural broadband networks adaptively and efficiently. This mechanism is not novel, as it has been used for high-speed, large-scale, and distributed networks. This research investigates how software agents can collaborate in managing rural broadband networks and in developing an autonomous, decentralized network management mechanism. In rural networks, network management is a challenging task because of the lack of a reliable power supply, greater geographical distances, topographical barriers, and the lack of technical support and computer repair facilities. This renders the network monitoring function complex and difficult. Since software agents are goal-driven, this research aims at developing a distributed management system that efficiently diagnoses errors on a given network and autonomously invokes effective changes to the network based on the goals defined on the system agents. To make this possible, the Siyakhula Living Lab network was used as the research case study, and its existing network management system was reviewed and used as the basis for the proposed one. The proposed network management system uses the JADE framework, the Hyperic-Sigar API, Java network programming, and the JESS scripting language to implement reasoning software agents. JADE and Java were used to develop the system agents to FIPA specifications, Hyperic-Sigar was used to collect device information, Jpcap was used to collect device network information, and JESS was used to develop a rule engine for agents to reason about the device and network state. Even though the system was developed with the Siyakhula Living Lab in mind, technically it can be used in any small-to-medium network because it is adaptable and scalable to various network infrastructure requirements. The proposed system consists of two types of agents, the MasterAgent and the NodeAgent. The MasterAgent resides on the device that hosts the agent platform, and a NodeAgent resides on each device connected to the network. The MasterAgent provides the network administrator with graphical and web user interfaces for viewing network analysis and statistics. The agent platform provides agents with their execution environment, and every agent, when started, is added to this platform. The system is platform independent, having been tested on Linux, Mac, and Windows. The implemented system has been found to provide a network management function suitable for rural broadband networks: it is scalable, in that more node agents can be added to the system to accommodate more devices in the network; autonomous, in its ability to reason and execute actions based on the defined rules; and fault-tolerant, through being designed as a decentralized platform, thereby reducing the Single Point of Failure (SPOF) risk in the system.
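    The implemented agents use JADE and JESS in Java; the sketch below is only a language-agnostic Python analogue of the rule-engine idea: a node agent evaluates rules over device metrics and reports fired actions to the master agent. The thresholds, metric names, and actions are invented.

```python
# Illustrative analogue of a NodeAgent's rule base (not JESS syntax):
# each rule pairs a metric predicate with a remedial action.
RULES = [
    ("cpu_load",  lambda v: v > 0.90, "CPU overload: throttle services"),
    ("disk_free", lambda v: v < 0.05, "Disk nearly full: rotate logs"),
    ("link_up",   lambda v: not v,    "Link down: switch to backup route"),
]

def evaluate(metrics):
    """Return the actions fired by the agent's rule base."""
    return [action for key, cond, action in RULES
            if key in metrics and cond(metrics[key])]

node_report = {"cpu_load": 0.95, "disk_free": 0.30, "link_up": True}
for action in evaluate(node_report):
    print("MasterAgent notified:", action)   # fires the CPU rule only
```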