    Cross-middleware Interoperability in Distributed Concurrent Engineering

    Secure, distributed collaboration between different organizations is a key challenge in Grid computing today. The GDCD project has produced a Grid-based demonstrator Virtual Collaborative Facility (VCF) for the European Space Agency. The purpose of this work is to show the potential of Grid technology to support fully distributed concurrent design, while addressing practical considerations including network security, interoperability, and integration of legacy applications. The VCF allows domain engineers to use the concurrent design methodology in a distributed fashion to perform studies for future space missions. To demonstrate the interoperability and integration capabilities of Grid computing in concurrent design, we developed prototype VCF components based on ESA’s current Excel-based Concurrent Design Facility (a non-distributed environment), using a STEP-compliant database that stores design parameters. The database was exposed as a secure GRIA 5.1 Grid service, whilst a .NET/WSE3.0-based library was developed to enable secure communication between the Excel client and the STEP database.
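
    The architecture the abstract describes, a central parameter database exposed as a secured service with thin clients on the engineers' side, can be pictured with a minimal sketch. The real project used a GRIA 5.1 Grid service and a .NET/WSE 3.0 client library; the endpoint, paths, and credential handling in this Python illustration are invented assumptions, not the project's actual API.

```python
# Illustrative sketch only: the GDCD project used GRIA 5.1 services and a
# .NET/WSE 3.0 client library; the endpoint, resource paths, and field
# names below are hypothetical.
import requests

STEP_DB_URL = "https://vcf.example.org/step-db"  # hypothetical service endpoint

class ParameterClient:
    """Minimal client for reading/writing shared design parameters."""

    def __init__(self, cert: str, key: str):
        self.session = requests.Session()
        self.session.cert = (cert, key)  # client certificate for mutual TLS

    def get_parameter(self, study: str, name: str):
        r = self.session.get(f"{STEP_DB_URL}/studies/{study}/parameters/{name}")
        r.raise_for_status()
        return r.json()["value"]

    def set_parameter(self, study: str, name: str, value):
        r = self.session.put(
            f"{STEP_DB_URL}/studies/{study}/parameters/{name}",
            json={"value": value},
        )
        r.raise_for_status()
```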

    A COLLABORATIVE MODEL FOR VIRTUAL ENTERPRISE

    Collaborative processes have three dimensions: actors, activities, and the logic of actions. The aim of this paper is to present a virtual portal model that helps manage consortiums. Our model is based on dynamic e-collaboration and has a modular structure with a multilayer approach. The functionality of the virtual enterprise's collaborative model covers user login with role-based access control, searching for and providing distributed resources, accessibility, metadata management, and improved information management. Our proposed solution offers a functional architecture for a virtual enterprise using dynamic e-collaboration and a shared space.
    Keywords: dynamic e-collaboration, multilayer solution, modular approach
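
    One concrete element of the model, login based on role and access control, can be sketched as a simple role-to-permission mapping. This is a hypothetical Python illustration; the role names and permissions are invented, not taken from the paper.

```python
# Hedged sketch of role-based access control in a consortium portal;
# roles and permissions below are invented for illustration.
ROLE_PERMISSIONS = {
    "coordinator": {"manage_members", "publish_resource", "search", "read"},
    "partner":     {"publish_resource", "search", "read"},
    "guest":       {"search", "read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role may perform the given action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Example: a consortium partner may publish resources but not manage members.
assert is_allowed("partner", "publish_resource")
assert not is_allowed("partner", "manage_members")
```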

    Comparative Analysis of Apache 2 Performance in Docker Containers vs Native Environment

    Web servers have become crucial for distributing content on the internet, and Docker containerization technology offers a convenient way to deploy them: Docker allows developers to package an application and its dependencies in one container, making web server deployment faster and easier. The question is how much performance must be sacrificed when Docker is chosen for web server deployment. To answer it, we conducted a study comparing how Apache2 performs when running in a Docker container versus running natively, using experimental methods and the ApacheBench tool to load-test Apache2 in both environments. The results show that Apache2 performance on the native host is about 5-10% better than in the Docker environment when handling small request loads, where "better" refers to the parameters we tested: total time, requests per second, and transfer speed; the request loads at which this holds can differ depending on the server specification itself. Although Docker offers application isolation and scalability, our results show that running Apache2 natively, without changing its default configuration, is more efficient. The additional overhead comes from the isolation Docker provides: a virtualization layer is required to run Apache2 inside a container, which can affect application performance and cause a slight degradation compared to using the host operating system directly. This research informs developers about the performance difference between Apache2 in Docker and in a native environment, helping them make informed decisions about deployment environments: the native host performs better, although it does not offer Docker's extensive feature set.
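
    The measurement procedure can be sketched as follows: run ApacheBench against Apache2 in each environment and compare the reported throughput. The URLs, ports, and load parameters in this Python sketch are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of the benchmarking procedure: drive ApacheBench (ab)
# against Apache2 in both environments and extract requests per second.
import re
import subprocess

def requests_per_second(url: str, requests: int = 1000, concurrency: int = 10) -> float:
    """Run ab against a URL and parse the mean requests-per-second figure."""
    out = subprocess.run(
        ["ab", "-n", str(requests), "-c", str(concurrency), url],
        capture_output=True, text=True, check=True,
    ).stdout
    return float(re.search(r"Requests per second:\s+([\d.]+)", out).group(1))

# Assumed setup: Apache2 natively on port 80, containerized on port 8080.
native = requests_per_second("http://localhost:80/")
docker = requests_per_second("http://localhost:8080/")
print(f"native {native:.1f} req/s vs docker {docker:.1f} req/s "
      f"({100 * (native - docker) / docker:.1f}% difference)")
```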

    Flexible, Policy-Oriented and Multi-Cloud Deployment of Social Media Analysis Tools in the COLA Project

    The relationship between companies and customers, and between public authorities and citizens, has changed dramatically with the widespread utilisation of the Internet and social networks. To help governments keep abreast of these changes, Inycom has developed Eccobuzz and Magician, a set of web applications for Social Media data mining. The unpredictable load of these applications requires flexible user-defined policies and automated scalability during deployment and execution time. Even more importantly, privacy norms require that data is restricted to certain physical locations. This paper explains how such applications are described with Application Description Templates (ADTs). ADTs define complex topology descriptions and various deployment, scalability and security policies; these templates are used by a submitter that translates this generic information into an executable format for submission to the reference framework of the COLA European project.
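
    A rough picture of what an ADT conveys is topology plus policies. Real COLA/MiCADO ADTs are TOSCA-based YAML documents; the Python sketch below uses invented, simplified field names purely to illustrate the kind of scalability and data-location policies described above.

```python
# Hedged sketch of the information an Application Description Template (ADT)
# carries; field names are simplified inventions, not the COLA schema.
import json

adt = {
    "application": "eccobuzz",
    "topology": {
        "web":   {"image": "inycom/eccobuzz-web",   "replicas": 1},
        "miner": {"image": "inycom/eccobuzz-miner", "replicas": 2},
    },
    "policies": [
        # Scalability: keep CPU load between bounds by adding/removing instances.
        {"type": "scalability", "target": "miner",
         "min_instances": 1, "max_instances": 10,
         "scale_up_above_cpu": 0.8, "scale_down_below_cpu": 0.2},
        # Data location: privacy norms restrict data to certain regions.
        {"type": "location", "target": "miner", "allowed_regions": ["eu-west"]},
    ],
}

# A submitter would translate this generic description into an executable
# deployment for the target cloud(s).
print(json.dumps(adt, indent=2))
```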

    Process Personalization Framework for Service-Driven Enterprises

    Service functions and service activities are an integral part of enterprises. Although technologies for developing service functions have improved, errors persist in service activities. Noted computer scientist Ramamoorthy describes personalization, customization, and humanization of service functions as an effective approach for reducing errors in service activities. This paper argues that current personalization approaches do not effectively address the entire spectrum of service functions. The proposed personalization framework can advance the current state of personalization by enabling tools as services and services as tools. We discuss the framework using biological research as an example of a service-driven enterprise. The proposed framework is based on our enterprise process personalization patent.

    Mapping web personal learning environments

    A recent trend in web development is to build platforms which are carefully designed to host a plurality of software components (sometimes called widgets or plugins) that can be organized or combined (mashed up) at the user's convenience to create personalized environments. The same holds true for the web development of educational applications. The degree of personalization can depend on the role of users, as in a traditional virtual learning environment where the components are chosen by a teacher in the context of a course; or it can be more open, as in a so-called personal learning environment (PLE). There is now a wide array of available web platforms exhibiting different functionalities but all built on the same concept of aggregating components together to support different tasks and scenarios, and there is an overlap between the development of PLEs and more generic developments in Web 2.0 applications such as social network sites. This article shows that six more or less independent dimensions map the functionalities of these platforms: the screen dimension maps the visual integration, the data dimension maps the portability of data, the temporal dimension maps the coupling between participants, the social dimension maps the grouping of users, the activity dimension maps the structuring of end users' interactions with the environment, and the runtime dimension maps the flexibility in accessing the system from different end points. Finally, these dimensions are used to compare six familiar web platforms which could potentially be used in the construction of a PLE.
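
    A simple way to picture the resulting comparison is a per-platform score along the six dimensions. The sketch below is hypothetical: the platform names and 0-2 scores are placeholders, not the article's actual ratings.

```python
# Hedged sketch of comparing platforms along the article's six dimensions;
# the platforms and scores below are invented placeholders.
DIMENSIONS = ["screen", "data", "temporal", "social", "activity", "runtime"]

platforms = {
    "PlatformA": {"screen": 2, "data": 1, "temporal": 0,
                  "social": 2, "activity": 1, "runtime": 1},
    "PlatformB": {"screen": 1, "data": 2, "temporal": 2,
                  "social": 1, "activity": 0, "runtime": 2},
}

def profile(name: str) -> str:
    """Render one platform's position along the six dimensions."""
    scores = platforms[name]
    return name + ": " + ", ".join(f"{d}={scores[d]}" for d in DIMENSIONS)

for p in platforms:
    print(profile(p))
```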

    Industry Simulation Gateway on a Scalable Cloud

    Large-scale simulation experimentation typically requires significant computational resources due to the large number of simulation runs and replications to be performed. The traditional approach to providing such computational power, both in academic research and in industry/business applications, was to use computing clusters or desktop grid resources. However, such resources not only require upfront capital investment but also lack the flexibility and scalability required to serve a variable number of clients/users efficiently. This paper presents how SakerGrid, a commercial desktop-grid-based simulation platform, and its associated science gateway have been extended towards a scalable cloud computing solution. The integration of SakerGrid with the MiCADO automated deployment and autoscaling framework supports the execution of multiple simulation experiments by dynamically allocating virtual machines in the cloud in order to complete each experiment by a user-defined deadline.
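
    The deadline-driven allocation can be illustrated with a back-of-the-envelope calculation: given the number of runs, the average run time, and the deadline, estimate how many worker VMs to provision. This Python sketch is a simplification under assumed inputs; MiCADO's actual autoscaling policies are more sophisticated.

```python
# Hedged sketch of deadline-driven scaling: estimate the number of identical
# worker VMs needed to finish all simulation runs by a user-defined deadline.
import math

def vms_needed(runs: int, avg_run_minutes: float,
               deadline_minutes: float, runs_per_vm_in_parallel: int = 1) -> int:
    """Number of worker VMs required to finish all runs within the deadline."""
    runs_one_vm_can_finish = runs_per_vm_in_parallel * math.floor(
        deadline_minutes / avg_run_minutes)
    if runs_one_vm_can_finish < 1:
        raise ValueError("deadline shorter than a single run")
    return math.ceil(runs / runs_one_vm_can_finish)

# Example (invented numbers): 500 runs of ~12 minutes each, 4-hour deadline.
print(vms_needed(runs=500, avg_run_minutes=12, deadline_minutes=240))  # -> 25
```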