
    An Architectural Framework for Performance Analysis: Supporting the Design, Configuration, and Control of DIS/HLA Simulations

    Technology advances are providing greater capabilities for most distributed computing environments. However, these advances are paralleled by progressively increasing system complexity. In many instances, this complexity can lead to a lack of understanding of bottlenecks in the run-time performance of distributed applications. This is especially true in the domain of distributed simulations, where a myriad of enabling technologies are used as building blocks to provide large-scale, geographically dispersed, dynamic virtual worlds. Persons responsible for the design, configuration, and control of distributed simulations need to understand how decisions regarding the allocation and use of the logical and physical resources that comprise a distributed simulation environment affect run-time performance. Distributed Interactive Simulation (DIS) and High Level Architecture (HLA) simulation applications have historically provided some of the most demanding distributed computing environments in terms of performance, and as such have a justified need for performance information sufficient to support decision-makers trying to improve system behavior. This research addresses two fundamental questions: (1) Is there an analysis framework suitable for characterizing DIS and HLA simulation performance? (2) What kind of mechanism can adequately monitor, measure, and collect performance data to support different performance analysis objectives for DIS and HLA simulations? This thesis presents a unified architectural framework for DIS and HLA simulations, provides details on a performance monitoring system, and shows its effectiveness through a series of use cases that include practical applications of the framework to real-world U.S. Department of Defense (DoD) programs. The thesis also discusses the robustness of the constructed framework and its applicability to performance analysis of more general distributed computing applications.

    Design of a framework for automated service mashup creation and execution based on semantic reasoning

    Instead of building self-contained silos, applications are being broken down into independent structures able to offer a scoped service using open communication standards and encodings. Nowadays there is no automatic environment for the construction of new mashups from these reusable services. At the same time, the designer of the mashup needs to establish the actual locations where the different components are deployed. This paper introduces the development of a framework focusing on the dynamic creation and execution of service mashups. By enriching the available building blocks with semantic descriptions, new service mashups are automatically composed through the use of planning algorithms. The composed mashups are automatically deployed on the available resources, making optimal use of the bandwidth, storage, and computing power of the network and server elements. The system is extended with dynamic recovery from resource and network failures. This enrichment of business components and services with semantics, reasoning, and distributed deployment is demonstrated by means of an e-shop use case.
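    The composition step described above can be pictured as a small forward-chaining planner that matches the semantic outputs of one service to the inputs of another. The following is a minimal sketch under that assumption; the service names and semantic types are hypothetical illustrations, not the paper's actual ontology or algorithm.

```python
# Hypothetical sketch: services described only by the semantic types
# they consume (inputs) and produce (outputs). A greedy forward-chaining
# loop applies any service whose inputs are already satisfied until the
# goal type appears, yielding one possible mashup plan.

def compose(services, available, goal):
    plan, facts = [], set(available)
    changed = True
    while goal not in facts and changed:
        changed = False
        for name, (inputs, outputs) in services.items():
            if name not in plan and set(inputs) <= facts:
                plan.append(name)        # schedule this service
                facts |= set(outputs)    # its outputs become available
                changed = True
    return plan if goal in facts else None

# Illustrative e-shop-style services (made-up names and types).
services = {
    "catalog":  (["query"], ["product_list"]),
    "pricing":  (["product_list"], ["priced_list"]),
    "checkout": (["priced_list", "user"], ["order"]),
}
print(compose(services, ["query", "user"], "order"))
# → ['catalog', 'pricing', 'checkout']
```

    A real semantic-reasoning planner would match types through an ontology rather than string equality, but the chaining structure is the same.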

    Analysis of DC microgrids as stochastic hybrid systems

    A modeling framework for dc microgrids and distribution systems based on the dual active bridge (DAB) topology is presented. The purpose of this framework is to accurately characterize the dynamic behavior of multi-converter systems as a function of exogenous load and source inputs. The base model is derived for deterministic inputs and then extended to the case of stochastic load behavior. At the core of the modeling framework is a large-signal DAB model that accurately describes the dynamics of both ac and dc state variables. This model addresses limitations of existing DAB converter models, which are not suitable for system-level analysis due to inaccuracy and poor upward scalability. The converter model acts as a fundamental building block in a general procedure for constructing models of multi-converter systems. System-level model construction is possible only because of structural properties of the converter model that mitigate prohibitive increases in size and complexity. To characterize the impact of randomness in practical loads, stochastic load descriptions are included in the deterministic dynamic model. The combined behavior of distributed loads is represented by a continuous-time stochastic process, and models that govern this load process are generated using a new modeling procedure that builds incrementally from individual device-level representations. To merge the stochastic load process and the deterministic dynamic models, the microgrid is modeled as a stochastic hybrid system. The stochastic hybrid model predicts the evolution of the moments of the dynamic state variables as a function of load model parameters; these moments provide useful approximations of typical system operating conditions over time. Applications of the deterministic models include system stability analysis and computationally efficient time-domain simulation, while the stochastic hybrid models provide a framework for performance assessment and optimization.
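    The idea of predicting the evolution of moments of dynamic states can be illustrated on a toy linear system, dx = (-a·x + u) dt + σ dW, whose first two moments obey closed ODEs: d⟨x⟩/dt = -a⟨x⟩ + u and d⟨x²⟩/dt = -2a⟨x²⟩ + 2u⟨x⟩ + σ². This is not the DAB model from the abstract, and all parameter values below are made up; it only shows the kind of moment propagation a stochastic hybrid model enables.

```python
# Toy moment propagation for dx = (-a*x + u) dt + sigma dW.
# Euler integration of the closed moment ODEs (not the paper's model).

def propagate_moments(a, u, sigma, m1, m2, dt, steps):
    """Integrate d<x>/dt = -a<x> + u and
    d<x^2>/dt = -2a<x^2> + 2u<x> + sigma^2 forward in time."""
    for _ in range(steps):
        m1 += dt * (-a * m1 + u)
        m2 += dt * (-2 * a * m2 + 2 * u * m1 + sigma**2)
    return m1, m2

# Made-up parameters; steady state: mean -> u/a, variance -> sigma^2/(2a).
m1, m2 = propagate_moments(a=2.0, u=4.0, sigma=0.5, m1=0.0, m2=0.0,
                           dt=1e-3, steps=20000)
print(m1, m2 - m1**2)   # mean ≈ 2.0, variance ≈ 0.0625
```

    The appeal of the moment formulation is visible even here: one deterministic ODE integration replaces many Monte Carlo sample paths.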

    Role of Optical Network in Cloud/Fog Computing

    This chapter explores the role of the optical network in the cloud/fog computing environment. With growing network issues, unified and cost-effective computing services and efficient utilization of optical resources are required for building smart applications. Fog computing provides the foundation platform for implementing cyber-physical system (CPS) applications that require ultra-low latency. The digital revolution of fog/cloud computing using optical resources has also upgraded the education system by integrating virtual reality (VR) through fog nodes. Presently, current technologies face many challenges, such as ultra-low delay, optimum bandwidth, and minimum energy consumption, in supporting VR-based and electroencephalogram (EEG)-based gaming applications. Therefore, an Optical-Fog layer is introduced to provide a novel, secure, highly distributed, and ultra-dense fog computing infrastructure. For optimum utilization of optical resources, a novel concept of OpticalFogNode is introduced that provides computation and storage capabilities at the Optical-Fog layer in the software-defined networking (SDN)-based optical network. It efficiently facilitates the dynamic deployment of new distributed SDN-based OpticalFogNodes, which support low-latency services with minimum energy and bandwidth usage. Finally, an EEG-based VR framework is introduced that uses the resources of the optical network in the cloud/fog computing environment.

    A tuple space based agent programming framework

    Software agents have become a research focus in distributed systems in recent years. This thesis aims at developing a methodology that facilitates the design and implementation of distributed agent applications. We propose an agent programming model called TSAM, a development framework for building distributed agent systems. TSAM provides an agent architecture that distinguishes three types of agent behaviors: (i) sensory behaviors, (ii) reactive behaviors, and (iii) proactive behaviors. Role models are used to design the different proactive behaviors assigned to an agent. TSAM supports agent coupling with both message passing and distributed tuple spaces. A tuple space facilitates dynamic coordination among a group of agents that work together towards a common goal. We apply TSAM to an example e-market system to validate its usefulness, simplicity, and support for dynamic coupling among application agents. Performance testing on the implemented system demonstrates that the flexibility of tuple-space-based coordination does not incur significant runtime overhead compared with message passing.
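    The tuple-space coordination style the abstract relies on is the classic Linda model: agents publish tuples and retrieve them by pattern matching rather than by addressing each other. The sketch below is a minimal in-memory illustration of that model, not TSAM's actual API; method names follow Linda convention (out/rd/inp), and thread safety and distribution are omitted.

```python
# Minimal Linda-style tuple space (illustrative, single-process).
# Agents coordinate by pattern matching on shared tuples instead of
# sending messages to named peers.

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, *tup):
        """Publish a tuple into the space."""
        self._tuples.append(tup)

    def _match(self, pattern, tup):
        # None acts as a wildcard field in a pattern.
        return len(pattern) == len(tup) and all(
            p is None or p == v for p, v in zip(pattern, tup))

    def rd(self, *pattern):
        """Non-destructive read of the first matching tuple."""
        return next((t for t in self._tuples if self._match(pattern, t)), None)

    def inp(self, *pattern):
        """Destructive take: read and remove the first match."""
        t = self.rd(*pattern)
        if t is not None:
            self._tuples.remove(t)
        return t

# Hypothetical e-market interaction: agents post bids anonymously.
ts = TupleSpace()
ts.out("bid", "agent-1", 120)
ts.out("bid", "agent-2", 135)
print(ts.rd("bid", "agent-2", None))   # → ('bid', 'agent-2', 135)
print(ts.inp("bid", None, None))       # → ('bid', 'agent-1', 120)
```

    The decoupling in space and time shown here (producers and consumers never name each other) is what makes tuple spaces attractive for the dynamic group coordination the thesis targets.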

    Tele-archaeology

    Tele-archaeology, in its basic sense, may be defined as the use of telecommunications to provide archaeological information and services. Two different kinds of technology make up most tele-archaeology applications in use today: the first is used for transferring information from one location to another; the other is multi-way interactive knowledge distribution. In this paper we examine the possibilities of tele-archaeology and offer a general framework for implementing this technology. The main positive effect of tele-archaeology is the move towards a real “distributed interactive archaeology”, in which archaeological knowledge building is a collective and dynamic series of tasks and processes. An individual archaeologist cannot fully explain his or her data, because the explanatory process needs knowledge as raw material, and this knowledge does not exist in the individual mind of the scientist but in the research community as a whole.

    Hadoop MapReduce for Mobile Cloud

    The new generations of mobile devices have high processing power and storage, but they lag behind in terms of software systems for big data storage and processing. Hadoop is a scalable platform that provides distributed storage and computational capabilities on clusters of commodity hardware. Building Hadoop on a mobile network enables the devices to run data-intensive computing applications without direct knowledge of the underlying distributed systems' complexities. However, these applications have severe energy and reliability constraints (e.g., caused by unexpected device failures or topology changes in a dynamic network). As mobile devices are more susceptible to unauthorized access than traditional servers, security is also a concern for sensitive data. Hence, it is paramount to consider reliability, energy efficiency, and security for such applications. The goal of this thesis is to bring the Hadoop MapReduce framework to a mobile cloud environment such that it resolves these bottlenecks in big data processing. The Mobile Distributed File System (MDFS) addresses these issues for big data processing in mobile clouds. We have developed the Hadoop MapReduce framework over MDFS and evaluated its performance by varying input workloads in a real heterogeneous mobile cluster. Our evaluation shows that the implementation addresses all constraints in processing large amounts of data in mobile clouds. Thus, our system is a viable solution to meet the growing demands of data processing in a mobile environment.
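    The MapReduce programming model that Hadoop implements, and that the thesis ports onto MDFS, can be summarized by three phases: map emits key/value pairs, shuffle groups values by key, and reduce aggregates each group. The word-count sketch below shows only this model; the MDFS storage layer, shuffling over a mobile network, and fault tolerance are all out of scope here.

```python
# Word count in the MapReduce model (single-process illustration).
from collections import defaultdict

def map_phase(documents):
    # Map: emit (word, 1) for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group emitted values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values independently.
    return {key: sum(values) for key, values in groups.items()}

docs = ["mobile cloud", "mobile data processing", "cloud data"]
print(reduce_phase(shuffle(map_phase(docs))))
# → {'mobile': 2, 'cloud': 2, 'data': 2, 'processing': 1}
```

    Because each reduce group is independent, the framework can distribute map and reduce tasks across devices, which is precisely what makes the model attractive for a mobile cluster despite failures and topology changes.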