
    SecNetworkCloudSim: An Extensible Simulation Tool for Secure Distributed Mobile Applications

    Fueled by wide interest in achieving rich storage services at the lowest possible cost, cloud computing has emerged as a highly desirable service paradigm extending well beyond virtualization technology. The next generation of mobile cloud services manipulates more and more sensitive data in VM-based distributed applications, so the need to secure sensitive data in mobile cloud computing is more evident than ever. However, despite the release of several cloud simulators, controlling users' access and protecting data exchanges in distributed mobile applications over the cloud remains a major challenge. This paper introduces SecNetworkCloudSim, an extension of NetworkCloudSim: a secure mobile simulation tool deliberately designed to preserve confidential access to data hosted on mobile devices and distributed cloud servers. Users' high-level requests pass through an underlying proxy, an important layer in this new simulator, where users perform secure, authenticated access to cloud services, allocate their tasks under a secure VM-based policy, have data confidentiality among VMs managed automatically, and obtain high efficiency and coverage rates. Most importantly, thanks to the secure nature of the proxy, users' distributed tasks can be executed without alteration under different proxy security policies. We implement a scenario of a follow-up healthcare distributed application using the new extension.

    Configuration Management for Distributed Software Services

    The paper describes the SysMan approach to interactive configuration management of distributed software components (objects). Domains are used to group objects in order to apply policy and for convenient naming of objects. Configuration management involves using a domain browser to locate relevant objects within the domain service; creating new objects which form a distributed service; allocating these objects to physical nodes in the system; and binding the interfaces of the objects to each other and to existing services. Dynamic reconfiguration of the objects forming a service can be accomplished using this tool. Authorisation policies specify which domains are accessible by which managers and which interfaces can be bound together.
    Keywords: domains, object creation, object binding, object allocation, graphical management interface.
    1 INTRODUCTION. The object-oriented approach brings considerable benefits to the design and implementation of software for distributed systems (Kramer 1992). Con..
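The domain/binding concepts above can be illustrated with a minimal sketch. All class and method names here are invented for illustration; they are not taken from the SysMan implementation.

```python
# Hypothetical sketch of domain-based grouping and interface binding,
# loosely modelled on the concepts described in the abstract.

class ManagedObject:
    def __init__(self, name):
        self.name = name
        self.bindings = {}  # interface name -> target object

    def bind(self, interface, target):
        # Bind one of this object's interfaces to another object.
        self.bindings[interface] = target

class Domain:
    """Groups objects so that policy can be applied to all members."""
    def __init__(self, name):
        self.name = name
        self.members = {}

    def add(self, obj):
        self.members[obj.name] = obj

    def lookup(self, name):
        return self.members[name]

# Build a tiny distributed service: a client bound to a server.
service = Domain("/services/printing")
server = ManagedObject("spooler")
client = ManagedObject("front-end")
service.add(server)
service.add(client)
service.lookup("front-end").bind("submit", server)

print(client.bindings["submit"].name)  # -> spooler
```

A real system would additionally attach authorisation policies to domains and place objects on physical nodes; the sketch only shows the naming/grouping and binding structure.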

    Capsules and Semantic Regions for Code Visualization and Direct Manipulation of Live Programs

    JPie is a visual programming environment supporting live construction of Java applications. Class modifications, such as declaring instance variables and overriding methods, take effect immediately on existing instances of the class to encourage experimentation in an educational setting. Because programs are edited live, editing gestures must transform the program from one well-formed state to another, without intermediate ambiguous states. To accomplish this, JPie’s visual representation provides capsules, which represent logical code units, and semantic regions, which represent different aspects of a program. A capsule’s meaning depends upon its containing semantic region. Similarly, a gesture, which involves manipulation of a capsule, is interpreted on the basis of the semantic region in which it occurs. This paper describes how capsules and semantic regions visually expose the structure of JPie programs and support live program editing through natural atomic gestures

    A survey of general-purpose experiment management tools for distributed systems

    In the field of large-scale distributed systems, experimentation is particularly difficult. The studied systems are complex, often nondeterministic and unreliable; software is plagued with bugs; and experiment workflows are unclear and hard to reproduce. These obstacles have led many independent researchers to design tools to control their experiments, boost productivity, and improve the quality of scientific results. Despite much research on experiment management for distributed systems, the current fragmentation of efforts calls for a general analysis. We therefore propose to build a framework to uncover missing functionality in these tools, enable meaningful comparisons between them, and derive recommendations for future improvements and research. The contribution of this paper is twofold. First, we provide an extensive list of features offered by general-purpose experiment management tools dedicated to distributed systems research on real platforms. We then use it to assess existing solutions and compare them, outlining possible future paths for improvement.

    Using business workflows to improve quality of experiments in distributed systems research

    Distributed systems pose many difficult problems to researchers. Due to their large-scale complexity, their numerous constituents (e.g., computing nodes, network links) tend to fail in unpredictable ways. This fragility of experiment execution threatens reproducibility, often considered a foundation of experimental science. Our poster presents a new approach to the description and execution of experiments involving large-scale computer installations. The main idea consists in describing the experiment as a workflow and leveraging techniques from business workflow management to execute it reliably and efficiently. Moreover, to facilitate the design process, the framework provides abstractions that hide unnecessary complexity from the user.
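The experiment-as-workflow idea can be sketched as tasks with dependencies, executed in dependency order with simple per-task retries to tolerate transient failures. This is an illustrative sketch, not the poster's framework; the task names and retry policy are invented.

```python
# Illustrative sketch: an experiment expressed as a workflow of named
# tasks with dependencies, run in topological order with retries.

from graphlib import TopologicalSorter

def run(task, retries=3):
    # Retry a task a few times to absorb transient node failures.
    for _ in range(retries):
        try:
            return task()
        except RuntimeError:
            continue
    raise RuntimeError("task failed after retries")

# Task bodies are placeholders for real deployment/measurement steps.
tasks = {
    "reserve_nodes": lambda: "nodes",
    "deploy_software": lambda: "deployed",
    "run_benchmark": lambda: "results",
}
# Each task maps to the set of tasks it depends on.
deps = {
    "deploy_software": {"reserve_nodes"},
    "run_benchmark": {"deploy_software"},
}

order = list(TopologicalSorter(deps).static_order())
results = {name: run(tasks[name]) for name in order}
print(order)  # -> ['reserve_nodes', 'deploy_software', 'run_benchmark']
```

A business-workflow engine would add persistence and compensation on top of this ordering, which is what makes the executions reliable and reproducible.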

    Leveraging business workflows in distributed systems research for the orchestration of reproducible and scalable experiments

    Although research on distributed systems is rapid and active, experiments in this field are often difficult to design, describe, conduct, and reproduce. Overcoming these difficulties would further stimulate the field and lend more credibility to results in distributed systems research. The key factors responsible for this situation are technical (software bugs and hardware errors), methodological (incorrect practices), and social (reluctance to share work). In this paper, the existing approaches to managing experiments on distributed systems are described, and a novel approach using business process management (BPM) is presented to address their shortcomings. The questions that arise when such an approach is taken are then addressed. We show that it can be a better alternative to the traditional way of performing experiments, as it encourages better scientific practices and results in more valuable research and publications. Finally, a plan of our future work is outlined and other applications of this work are discussed.

    A HyperNet Architecture

    Network virtualization is becoming a fundamental building block of future Internet architectures. By adding networking resources to the “cloud”, it is possible for users to rent virtual routers from the underlying network infrastructure, connect them with virtual channels to form a virtual network, and tailor the virtual network (e.g., load application-specific networking protocols, libraries, and software stacks onto the virtual routers) to carry out a specific task. In addition, network virtualization technology allows such special-purpose virtual networks to co-exist on the same network infrastructure without interfering with each other. Although the underlying network resources needed to support virtualized networks are rapidly becoming available, constructing a virtual network from the ground up and using it is a challenging and labor-intensive task, one best left to experts. To tackle this problem, we introduce the concept of a HyperNet: a pre-built, pre-configured network package that a user can easily deploy or access as a virtual network to carry out a specific task (e.g., multicast video conferencing). HyperNets package together the network topology configuration, software, and network services needed to create and deploy a custom virtual network. Users download HyperNets from HyperNet repositories and then “run” them on virtualized network infrastructure, much as users download and run virtual appliances on a virtual machine. To support the HyperNet abstraction, we created a Network Hypervisor service that provides a set of APIs that can be called to create a virtual network with certain characteristics. To evaluate the HyperNet architecture, we implemented several example HyperNets and ran them on our prototype implementation of the Network Hypervisor. Our experiments show that the Hypervisor API can be used to compose almost any special-purpose network, including networks capable of carrying out functions that the current Internet does not provide. Moreover, the design of our HyperNet architecture is highly extensible, enabling developers to write high-level libraries (using the Network Hypervisor APIs) to accomplish complicated tasks.
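To make the Network Hypervisor idea concrete, here is a hedged sketch of what a HyperNet package scripting such an API could look like. The class and method names are invented for illustration and are not the paper's actual API.

```python
# Hypothetical sketch of a Network Hypervisor-style API: create
# virtual routers, load protocols, and connect them with channels.

class NetworkHypervisor:
    def __init__(self):
        self.routers, self.channels = [], []

    def create_router(self, name, protocols=()):
        # A virtual router can be loaded with task-specific protocols.
        router = {"name": name, "protocols": list(protocols)}
        self.routers.append(router)
        return router

    def create_channel(self, a, b, bandwidth_mbps):
        # A virtual channel connects two virtual routers.
        self.channels.append((a["name"], b["name"], bandwidth_mbps))

# A "HyperNet package" would pre-script calls like these for a task
# such as multicast video conferencing.
hv = NetworkHypervisor()
src = hv.create_router("source", protocols=["multicast"])
relay = hv.create_router("relay", protocols=["multicast"])
hv.create_channel(src, relay, bandwidth_mbps=100)
print(len(hv.routers), len(hv.channels))  # -> 2 1
```

The value of the abstraction is that the user runs the pre-built package rather than issuing these calls by hand.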

    Asynchronous, hierarchical and scalable deployment of component-based applications

    The deployment of distributed component-based applications is a complex task. Proposed solutions are often centralized, which precludes their use for deploying large-scale applications. Moreover, these solutions often do not take into account functional constraints, i.e., the dependencies between component activations. Finally, most of them are not fault-tolerant. In this paper, we propose a deployment application that addresses these three problems. It is hierarchical, a necessary feature to guarantee scalability. Moreover, it is designed as a distributed workflow decomposed into tasks executing asynchronously, which allows an "as soon as possible" activation of deployed components. Finally, the proposed deployment application is fault-tolerant, achieved through the use of persistent agents with atomic execution. The deployment application has been tested, and performance measurements show that it is scalable.
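The "as soon as possible" activation policy can be sketched with asynchronous tasks: each component activates the moment all of its dependencies are active, rather than waiting for a global deployment phase. This is an illustrative sketch, not the paper's implementation; the component names are invented.

```python
# Illustrative sketch of asynchronous, dependency-driven activation.

import asyncio

async def activate(name, deps, done_events, order):
    # Wait only for this component's own dependencies ...
    await asyncio.gather(*(done_events[d].wait() for d in deps))
    order.append(name)          # ... then activate immediately.
    done_events[name].set()

async def deploy(dependencies):
    done_events = {c: asyncio.Event() for c in dependencies}
    order = []
    await asyncio.gather(*(
        activate(c, deps, done_events, order)
        for c, deps in dependencies.items()
    ))
    return order

# "database" and "cache" have no dependencies, so they activate
# concurrently; "web" activates as soon as both are done.
deps = {"database": [], "cache": [], "web": ["database", "cache"]}
order = asyncio.run(deploy(deps))
print(order[-1])  # -> web
```

A fault-tolerant version, as in the paper, would additionally run each activation inside a persistent agent with atomic execution so a crashed step can be replayed.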

    A language and toolkit for the specification, execution and monitoring of dependable distributed applications

    This thesis addresses the problem of specifying the composition of distributed applications out of existing, possibly legacy, applications. With the automation of business processes on the increase, more and more applications of this kind are being constructed. The resulting applications can be quite complex, are usually long-lived, and execute in a heterogeneous environment. In a distributed environment, long-lived activities need support for fault tolerance and dynamic reconfiguration: it is likely that the environment where they run will change (nodes may fail, services may be moved elsewhere or withdrawn) during their execution, and the specification will have to be modified. There is also a need for modularity, scalability, and openness. However, most existing systems consider only part of these requirements. A new area of research, called workflow management, has been trying to address these issues. This work first looks at what needs to be addressed to support the specification and execution of these new applications in a heterogeneous, distributed environment. A co-ordination (scripting) language is developed that fulfils the requirements of specifying the composition and inter-dependencies of distributed applications with the properties of dynamic reconfiguration, fault tolerance, modularity, scalability, and openness. The architecture of the overall workflow system and its implementation are then presented. The system has been implemented as a set of CORBA services, and the execution environment is built using a transactional workflow management system. Next, the thesis describes the design of a toolkit to specify, execute, and monitor distributed applications. The design of the co-ordination language and the toolkit represents the main contribution of the thesis. (PhD thesis; supported by the UK Engineering and Physical Sciences Research Council, CaberNet, and Northern Telecom (Nortel).)

    How To Touch a Running System

    The increasing importance of distributed and decentralized software architectures brings more and more attention to adaptive software. Achieving adaptiveness, however, is a difficult task, as the software design needs to foresee and cope with a variety of situations. Using reconfiguration of components facilitates this task, as the adaptivity is handled at the architecture level instead of directly in the code. This results in a separation of concerns: the appropriate reconfiguration can be devised at a coarse level, while the implementation of the components can remain largely unaware of reconfiguration scenarios. We study reconfiguration in component frameworks based on formal theory. We first discuss programming with components, exemplified by the development of the cmc model checker. This highly efficient model checker is made of C++ components and serves as an example of component-based software development practice in general, and it also provides insights into the principles of adaptivity. However, its component model focuses on high performance and is not geared towards using the structuring principle of components for controlled reconfiguration. We therefore complement this highly optimized model with a message-passing-based component model that takes reconfigurability as its central principle. Supporting reconfiguration in a framework is about relieving the programmer of the peculiarities as much as possible. We utilize the formal description of the component model to provide a reconfiguration algorithm that retains as much flexibility as possible while avoiding most problems that arise due to concurrency. This algorithm is embedded in a general four-stage adaptivity model inspired by physical control loops. The reconfiguration is devised to work with stateful components, retaining their data and unprocessed messages. Reconfiguration plans, which are given a formal semantics, form the input of the reconfiguration algorithm. We show that the algorithm achieves perceived atomicity of the reconfiguration process for an important class of plans, i.e., the whole process of reconfiguration is perceived as one atomic step, while minimizing the blocking of components. We illustrate the applicability of our approach to reconfiguration with several examples, such as fault tolerance and automated resource control.
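The core idea of reconfiguring a stateful component while retaining its data and unprocessed messages can be shown in miniature. This is a minimal sketch of the quiesce / transfer-state / swap pattern; the class names are illustrative and not from the thesis's framework.

```python
# Minimal sketch: swap a component's implementation while keeping
# its state and its queue of unprocessed messages.

import queue

class Counter:
    def __init__(self, state=0):
        self.state = state
        self.inbox = queue.Queue()  # unprocessed messages survive the swap

    def handle(self, msg):
        self.state += msg

class DoublingCounter(Counter):
    # The "new version" of the component, with changed behaviour.
    def handle(self, msg):
        self.state += 2 * msg

def reconfigure(old, new_cls):
    # 1. Quiesce: stop delivering new messages (implicit here).
    # 2. Transfer: carry over state and pending messages.
    new = new_cls(state=old.state)
    new.inbox = old.inbox
    # 3. Swap: resume with the new implementation.
    return new

c = Counter()
c.handle(5)
c.inbox.put(7)           # message queued but not yet processed
c = reconfigure(c, DoublingCounter)
c.handle(c.inbox.get())  # pending message handled by the new version
print(c.state)  # -> 19
```

In the full algorithm, the quiesce step is where perceived atomicity is earned: the environment must not observe the component mid-swap, while blocking is kept to a minimum.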