
    On the Automated Synthesis of Enterprise Integration Patterns to Adapt Choreography-based Distributed Systems

    The Future Internet is becoming a reality, providing a large-scale computing environment where a virtually infinite number of available services can be composed so as to fit users' needs. Modern service-oriented applications will more and more often be built by reusing and assembling distributed services. A key enabler for this vision is therefore the ability to automatically compose and dynamically coordinate software services. Service choreographies are an emergent Service Engineering (SE) approach to composing and coordinating services in a distributed way. When mismatching third-party services are to be composed, obtaining the distributed coordination and adaptation logic required to suitably realize a choreography is a non-trivial and error-prone task, so automatic support is needed. In this direction, this paper leverages previous work on the automatic synthesis of choreography-based systems and describes our preliminary steps towards exploiting Enterprise Integration Patterns to deal with a form of choreography adaptation. Comment: In Proceedings FOCLASA 2015, arXiv:1512.0694
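    The paper builds on Enterprise Integration Patterns (EIPs). As a concrete illustration of the kind of adaptation logic involved, below is a minimal sketch of one such pattern, a Message Translator placed between two mismatching services, written with Apache Camel's Java DSL; the endpoint names and the field mapping are hypothetical, not taken from the paper.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

/**
 * Hypothetical Message Translator (an Enterprise Integration Pattern):
 * it adapts the payload produced by one service to the format expected
 * by another, letting mismatching third parties join a choreography.
 */
public class ChoreographyAdapter {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Endpoint names are placeholders for real service bindings.
                from("seda:serviceA.out")
                    .process(exchange -> {
                        // The concrete mapping would be synthesized from
                        // the choreography specification.
                        String legacy = exchange.getIn().getBody(String.class);
                        exchange.getIn().setBody("{\"payload\":\"" + legacy + "\"}");
                    })
                    .to("seda:serviceB.in");
            }
        });
        context.start();
    }
}
```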

    Use of Service Oriented Architecture for SCADA Networks

    Supervisory Control and Data Acquisition (SCADA) systems involve the use of distributed processing to operate geographically dispersed endpoint hardware components. They manage the control networks used to monitor and direct large-scale operations such as utilities and transit systems that are essential to national infrastructure. SCADA industrial control networks (ICNs) have long operated in obscurity and been kept isolated largely through strong physical security. Today, Internet technologies are increasingly being utilized to access control networks, giving rise to a growing concern that they are becoming more vulnerable to attack. Like SCADA, distributed processing is also central to cloud computing or, more formally, the Service Oriented Architecture (SOA) computing model. Certain distinctive properties differentiate ICNs from the enterprise networks that cloud computing developments have focused on. The objective of this project is to determine whether modern cloud computing technologies can also be applied to improve dated SCADA distributed processing systems. Extensive research was performed regarding control network requirements as compared to those of general enterprise networks. Research was also conducted into the benefits, implementation, and performance of SOA to determine its merits for application to control networks. The conclusion developed is that some aspects of cloud computing might be usefully applied to SCADA systems but that SOA fails to meet ICN requirements in certain essential areas. The lack of current standards for SOA security presents an unacceptable risk to SCADA systems that manage dangerous equipment or essential services. SOA network performance is also not sufficiently deterministic to suit many real-time hardware control applications. Finally, SOA environments cannot as yet address the regulatory compliance assurance requirements of critical infrastructure SCADA systems.
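    The determinism objection is quantifiable. Below is a minimal sketch, not from the report, of the kind of measurement behind such a judgment: it times repeated calls to a hypothetical service endpoint and reports the worst-case latency and jitter that a control engineer would compare against a hard loop deadline.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Hypothetical latency/jitter probe for a service-oriented endpoint. */
public class LatencyProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder URL; a real probe would target the SOA gateway.
        HttpRequest request = HttpRequest
                .newBuilder(URI.create("http://scada-gateway.example/status"))
                .build();
        long worst = 0, best = Long.MAX_VALUE, total = 0;
        int samples = 100;
        for (int i = 0; i < samples; i++) {
            long t0 = System.nanoTime();
            client.send(request, HttpResponse.BodyHandlers.discarding());
            long dt = System.nanoTime() - t0;
            worst = Math.max(worst, dt);
            best = Math.min(best, dt);
            total += dt;
        }
        // Hard real-time control is judged by the worst case, not the mean:
        // the whole latency distribution must fit inside the loop deadline.
        System.out.printf("mean=%dus worst=%dus jitter=%dus%n",
                total / samples / 1000, worst / 1000, (worst - best) / 1000);
    }
}
```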

    On the Move to Meaningful Internet Systems: OTM 2015 Workshops: Confederated International Workshops: OTM Academy, OTM Industry Case Studies Program, EI2N, FBM, INBAST, ISDE, META4eS, and MSC 2015, Rhodes, Greece, October 26-30, 2015. Proceedings

    This volume constitutes the refereed proceedings of the following 8 international workshops: OTM Academy; OTM Industry Case Studies Program; Enterprise Integration, Interoperability, and Networking, EI2N; International Workshop on Fact Based Modeling 2015, FBM; Industrial and Business Applications of Semantic Web Technologies, INBAST; Information Systems in Distributed Environment, ISDE; Methods, Evaluation, Tools and Applications for the Creation and Consumption of Structured Data for the e-Society, META4eS; and Mobile and Social Computing for collaborative interactions, MSC 2015. These workshops were held as associated events at OTM 2015, the federated conferences "On The Move Towards Meaningful Internet Systems and Ubiquitous Computing", in Rhodes, Greece, in October 2015. The 55 full papers presented together with 3 short papers and 2 posters were carefully reviewed and selected from a total of 100 submissions. The workshops share the distributed aspects of modern computing systems; they experience the application pull created by the Internet and by the so-called Semantic Web, in particular developments of Big Data, the increased importance of security issues, and the globalization of mobile-based technologies.

    RAICS as advanced cloud backup technology in telecommunication networks

    Data crashes can cause unpredictable and even severe effects for an enterprise or authority. As an antidote, backup strategies unite a complex of organizational and technical measures that are necessary for data restoring, processing and transfer as well as for data security and defense against loss, crash and tampering. The modern high-performance Internet allows the delivery of backup functions and is complemented by attractive (mobile) services with a Quality of Service comparable to that in Local Area Networks. One of the most efficient backup strategies is the delegation of this functionality to an external provider, an online or Cloud Storage system. This article argues for a consideration of intelligently distributed backup over multiple storage providers in addition to the use of local resources. Some examples of Cloud Computing deployment in the USA and the European Union as well as in Ukraine and the Russian Federation are introduced to identify the benefits and challenges of distributed backup with Cloud Storage.
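    The core idea of distributing a backup over multiple providers can be illustrated with a RAID-like scheme. The following minimal sketch, assuming three interchangeable storage backends, splits data into two fragments plus an XOR parity fragment, so the outage of any single provider still permits full recovery; it illustrates the principle rather than the article's implementation.

```java
import java.util.Arrays;

/** Sketch of RAID-like striping of a backup across three providers. */
public class CloudParityBackup {

    /** Split data into two halves plus an XOR parity block. */
    static byte[][] encode(byte[] data) {
        int half = (data.length + 1) / 2;
        byte[] a = Arrays.copyOfRange(data, 0, half);
        byte[] b = Arrays.copyOfRange(data, half, half * 2); // zero-padded if odd
        byte[] parity = new byte[half];
        for (int i = 0; i < half; i++) parity[i] = (byte) (a[i] ^ b[i]);
        return new byte[][] { a, b, parity };
    }

    /** Rebuild a lost fragment by XOR-ing the two surviving ones. */
    static byte[] recover(byte[] x, byte[] y) {
        byte[] out = new byte[x.length];
        for (int i = 0; i < x.length; i++) out[i] = (byte) (x[i] ^ y[i]);
        return out;
    }

    public static void main(String[] args) {
        byte[][] fragments = encode("backup payload".getBytes());
        // Each fragment would be uploaded to a different provider. If the
        // provider holding fragment 0 fails, the other two rebuild it:
        byte[] rebuilt = recover(fragments[1], fragments[2]);
        System.out.println(Arrays.equals(rebuilt, fragments[0])); // true
    }
}
```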

    Semantic-Based Storage QoS Management Methodology -- Case Study for Distributed Environments

    Distributed computing environments, e.g. clouds, often deal with huge amounts of data, which constantly increase. The global growth of data is caused by ubiquitous personal devices, enterprise and scientific applications, etc. As the size of data grows, new challenges emerge in the context of storage management. Modern data and storage resource management systems need to face a wide range of problems: minimizing energy consumption (green data centers), optimizing resource usage, throughput and capacity, data availability, security and legal issues, and scalability. In addition, users or their applications can have QoS (Quality of Service) requirements concerning storage access, which further complicates the management. To cope with this problem, a common mass storage system model taking into account the performance aspects of a storage system becomes a necessity. Describing the model with semantic technologies brings semantic interoperability between the system components. In this paper we describe our approach to data management with QoS based on the developed models, as a case study for distributed environments.
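    To make QoS-aware storage management concrete, here is a minimal sketch of matching a user's storage QoS requirement against modeled storage resources. The attribute names and figures are hypothetical stand-ins for what the paper derives from its semantic mass-storage model.

```java
import java.util.List;

/** Hypothetical QoS matching over a simple storage-performance model. */
public class StorageQosMatcher {
    record StorageResource(String name, double throughputMBps, double availability) {}
    record QosRequirement(double minThroughputMBps, double minAvailability) {}

    /** Return the resources that satisfy the request, fastest first. */
    static List<StorageResource> match(List<StorageResource> pool, QosRequirement q) {
        return pool.stream()
                .filter(r -> r.throughputMBps() >= q.minThroughputMBps()
                          && r.availability() >= q.minAvailability())
                .sorted((a, b) -> Double.compare(b.throughputMBps(), a.throughputMBps()))
                .toList();
    }

    public static void main(String[] args) {
        var pool = List.of(
                new StorageResource("lustre-scratch", 900, 0.99),
                new StorageResource("object-store", 150, 0.9999),
                new StorageResource("tape-archive", 40, 0.9999));
        // An application requiring sustained 100 MB/s at 99.9% availability:
        System.out.println(match(pool, new QosRequirement(100, 0.999)));
    }
}
```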

    JWORB - Java Web Object Request Broker for Commodity Software based Visual Dataflow Metacomputing Programming Environment

    Programming environments and tools that are simultaneously sustainable, highly functional, robust and easy to use have been hard to come by in the HPDC area. This is partially due to the difficulty of developing sophisticated customized systems for what is a relatively small part of the worldwide computing enterprise. As commodity software becomes naturally distributed with the onset of the Web and Intranets, we now observe a new trend in the HPDC community [1, 8, 12] to base high performance computing on modern enterprise computing technologies. ... JWORB is a multi-protocol Java server under development at NPAC, currently capable of handling the HTTP and IIOP protocols. Hence, JWORB can be viewed as a Java-based Web Server which can also act as a CORBA broker. We present here the JWORB rationale, its architecture and implementation status, and results of early performance measurements, and we illustrate its role in the new WebFlow system under development.
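    The defining trick of such a multi-protocol server is dispatching HTTP and IIOP traffic arriving on the same port. A minimal sketch of how that dispatch can work follows; it relies on the fact that every IIOP request starts with the GIOP magic bytes "GIOP", while an HTTP request starts with a method name. The handler methods are placeholders, and this illustrates the idea rather than JWORB's actual code.

```java
import java.io.BufferedInputStream;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

/** Sketch of single-port HTTP/IIOP dispatch in the spirit of JWORB. */
public class MultiProtocolServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket socket = server.accept();
                InputStream in = new BufferedInputStream(socket.getInputStream());
                in.mark(4);
                byte[] head = in.readNBytes(4);
                in.reset(); // hand an untouched stream to the real handler
                if ("GIOP".equals(new String(head))) {
                    handleIiop(socket, in);   // placeholder: ORB-side processing
                } else {
                    handleHttp(socket, in);   // placeholder: web-server processing
                }
            }
        }
    }

    static void handleIiop(Socket s, InputStream in) { /* CORBA request broker */ }
    static void handleHttp(Socket s, InputStream in) { /* HTTP request handling */ }
}
```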

    Mining Knowledge in Astrophysical Massive Data Sets

    Modern scientific data mainly consist of huge datasets gathered by a very large number of techniques and stored in very diversified and often incompatible data repositories. More generally, in the e-science environment, it is considered a critical and urgent requirement to integrate services across distributed, heterogeneous, dynamic "virtual organizations" formed by different resources within a single enterprise. In the last decade, Astronomy has become an immensely data-rich field due to the evolution of detectors (plates to digital to mosaics), telescopes and space instruments. The Virtual Observatory approach consists in the federation under common standards of all astronomical archives available worldwide, as well as data analysis, data mining and data exploration applications. The main drive behind this effort is that, once the infrastructure is completed, it will allow a new type of multi-wavelength, multi-epoch science which can now only barely be imagined. Data Mining, or Knowledge Discovery in Databases, while being the main methodology to extract the scientific information contained in such MDS (Massive Data Sets), poses crucial challenges, since it has to orchestrate transparent access to different computing environments, scalability of algorithms, reusability of resources, etc. In the present paper we summarize the present status of MDS in the Virtual Observatory and what is currently done and planned to bring advanced Data Mining methodologies to the case of the DAME (DAta Mining & Exploration) project. Comment: Pages 845-849, 1st International Conference on Frontiers in Diagnostics Technologies

    An Open Framework for Developing Distributed Computing Environments for Multidisciplinary Computational Simulations

    Multidisciplinary computational simulations involve interactions between distributed applications, datasets, products, resources, and users. Because the very nature of such simulation software assumes a single computer and a small user base and audience, the kinds of applications that have been developed are often unfriendly to incorporation into a distributed model. However, advances in networking infrastructure and the natural tendency for information to be geographically distributed place strong requirements on the integration of single-computer codes with distributed information sources, as well as multiple computer codes that are geographically distributed in their execution. The hypothesis of this dissertation is that it is possible, via a novel integration of Internet, Distributed Computing, and Grid technologies, to create a distributed computational simulation system that satisfies the requirements of modern multidisciplinary computational simulation systems without compromising the functionality, performance, or security of existing applications. Furthermore, such a system would integrate disparate applications, resources, and users and would improve the productivity of users by providing new functionality not currently available. The hypothesis is proved constructively by first prototyping the Enterprise Computational Services framework, based on a multi-tier architecture, using the Java 2 Enterprise Edition platform and Web Services; two distributed systems, the Distributed Marine Environment Forecast System and the Distributed Simulation System for Seismic Performance of Urban Regions, are then prototyped using this enabling framework. Several interfaces to the framework are prototyped to illustrate that the same framework can be used to develop the multiple front-end clients required to support different types of users within a given computational domain. The two domain-specific distributed environments prototyped using the framework illustrate that the framework provides a reusable common infrastructure irrespective of the computational domain. The effectiveness and utility of the distributed system and the framework are demonstrated using a representative collection of computational simulations. Additional benefits provided by the distributed systems, in terms of new functionality, are evaluated to determine the impact on user productivity. The key contribution of this dissertation is a reusable infrastructure that could evolve to meet the requirements of next-generation hardware and software architectures while supporting interaction between a diverse set of users, distributed computational resources, and multidisciplinary applications.
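    The framework's central move is wrapping single-computer simulation codes behind Web Services in a multi-tier architecture. A minimal sketch of that facade idea follows, using the era-appropriate JAX-WS annotations; the service name, its operation, and the way the legacy solver is launched are hypothetical, not taken from the dissertation.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

/** Hypothetical Web Service facade over a legacy command-line simulation. */
@WebService
public class SimulationService {

    @WebMethod
    public String runForecast(String inputDeck) throws Exception {
        // Launch the existing single-computer code unchanged; the service
        // tier only mediates between distributed clients and the solver.
        Process p = new ProcessBuilder("forecast-solver", inputDeck) // placeholder binary
                .redirectErrorStream(true)
                .start();
        String output = new String(p.getInputStream().readAllBytes());
        p.waitFor();
        return output;
    }

    public static void main(String[] args) {
        // Publish the facade so geographically distributed clients can call it.
        Endpoint.publish("http://localhost:9090/simulation", new SimulationService());
    }
}
```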