    A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures

    Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing and the Apache-Hadoop paradigms. We propose a common basis, terminology, and set of functional factors with which to analyze both paradigms. We discuss the concept of "Big Data Ogres" and their facets as a means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms and compare and contrast the two approaches. Specifically, we examine common implementations of these paradigms, shed light upon the reasons for their current "architectures", and discuss some typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity. We discuss the potential integration of different implementations, across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms to a semi-quantitative methodology. We use a simple and widely used Ogre (K-means clustering) and characterize its performance on a range of representative platforms, covering several implementations from both paradigms. Our experiments provide insight into the relative strengths of the two paradigms. We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions.
    Comment: 8 pages, 2 figures
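    Since K-means clustering is the Ogre benchmarked above, a minimal sketch of the underlying algorithm (Lloyd's iteration) may help fix ideas. This is a plain NumPy illustration, not any of the HPC or Hadoop implementations measured in the paper:

        import numpy as np

        def kmeans(points, k, iters=100, seed=0):
            """Lloyd's algorithm: alternate assignment and centroid update."""
            rng = np.random.default_rng(seed)
            # Initialize centroids by sampling k distinct input points.
            centroids = points[rng.choice(len(points), size=k, replace=False)]
            for _ in range(iters):
                # Assign each point to its nearest centroid (Euclidean distance).
                dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
                labels = dists.argmin(axis=1)
                # Move each centroid to the mean of the points assigned to it.
                updated = np.array([points[labels == j].mean(axis=0)
                                    if np.any(labels == j) else centroids[j]
                                    for j in range(k)])
                if np.allclose(updated, centroids):
                    break  # converged
                centroids = updated
            return centroids, labels

    Both paradigms parallelize the same two steps: the assignment phase is embarrassingly parallel (a map), while the centroid update is a reduction, which is why K-means ports naturally to both MPI-style and MapReduce-style implementations.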

    Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"

    According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient. The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system, and the overall design, description and performance of the system itself. Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
    • The systems are composed of autonomous computational entities whose activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
    • The computational entities are mobile, due to the movement of the physical platforms or the movement of an entity from one platform to another.
    • The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise to their deletion. The behaviour of the entities may vary over time.
    • The systems operate with incomplete information about the environment. For instance, information becomes rapidly out of date, and mobility requires information about the environment to be discovered.
    The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems. This workshop covers the aspects related to languages and programming environments as well as the analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. A year after the start of the projects, the goal of the workshop is to take stock of the state of the art on the topics covered by the two clusters, related to programming environments and analysis of systems, as well as to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative. We acknowledge the Dipartimento di Informatica and Tlc of the University of Trento, the Comune di Rovereto and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for their valuable collaboration.

    A Case Study for Business Integration as a Service

    This paper presents Business Integration as a Service (BIaaS) to allow two services to work together in the Cloud to achieve a streamlined process. We illustrate this integration using two services, Return on Investment (ROI) Measurement as a Service (RMaaS) and Risk Analysis as a Service (RAaaS), in a case study at the University of Southampton. The case study demonstrates the cost savings and risk analysis achieved when the two services work as a single service. Advanced techniques are used to demonstrate the statistical and 3D Visualisation services under the remit of RMaaS, and the Monte Carlo Simulation as a Service behind the design of RAaaS. Computational results are presented and their implications discussed. Different types of risks associated with Cloud adoption can be calculated easily, rapidly and accurately with the use of BIaaS. This case study confirms the benefits of BIaaS adoption, including cost reduction and improvements in efficiency and risk analysis. Implementation of BIaaS in other organisations is also discussed. Important data arising from the integration of RMaaS and RAaaS are useful for the management and stakeholders of the University of Southampton.
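    To make the Monte Carlo side of RAaaS concrete, the sketch below simulates the distribution of net savings from a Cloud adoption decision. All distributions and figures here are illustrative assumptions, not data from the Southampton case study:

        import numpy as np

        def adoption_risk(n_trials=100_000, seed=42):
            """Monte Carlo sketch of Cloud-adoption risk (illustrative numbers only)."""
            rng = np.random.default_rng(seed)
            # Assumed annual savings: normal, mean 200k, standard deviation 40k.
            savings = rng.normal(200_000, 40_000, n_trials)
            # Assumed migration/running costs: lognormal, right-skewed.
            costs = rng.lognormal(mean=11.8, sigma=0.3, size=n_trials)
            net = savings - costs
            return {"expected_net": net.mean(),
                    "p_loss": (net < 0).mean(),       # probability of a net loss
                    "var_95": np.percentile(net, 5)}  # 5th-percentile outcome

        print(adoption_risk())

    Wrapping a simulation like this behind a service interface is what allows a measurement service such as RMaaS to consume risk estimates directly, which is the kind of service-to-service integration BIaaS streamlines.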

    Implementation and validation of a holonic manufacturing control system

    Flexible manufacturing systems are complex, stochastic environments requiring the development of innovative, intelligent control architectures that support agility and re-configurability. The ADACOR holonic control system addresses this challenge by introducing an adaptive production control approach supported by the presence of supervisor entities and the self-organization capabilities associated with each ADACOR holon. Validating the concepts proposed by the ADACOR control system requires their implementation and experimental testing, to analyze their correctness, applicability and merits. This paper describes the implementation of the ADACOR concepts in a flexible manufacturing system, verifies their correctness and applicability, and evaluates the performance of the ADACOR control system, considering not only quantitative indicators directly related to production parameters, e.g. manufacturing lead time, but also qualitative indicators, such as agility.
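    As a rough illustration of the supervisor-plus-autonomous-holon structure described above (a hypothetical sketch, not the ADACOR implementation), the following fragment allocates operations by letting resource holons bid on them, with a supervisor entity coordinating while it is present:

        import random

        class OperationalHolon:
            """Autonomous resource holon: bids for operations based on local load."""
            def __init__(self, name):
                self.name = name
                self.queue = []
            def bid(self, operation):
                # Shorter queue -> lower (better) bid; jitter models stochastic behaviour.
                return len(self.queue) + random.random()
            def accept(self, operation):
                self.queue.append(operation)

        class SupervisorHolon:
            """Supervisor entity: coordinates allocation while the system is stable."""
            def __init__(self, holons):
                self.holons = holons
            def allocate(self, operation):
                winner = min(self.holons, key=lambda h: h.bid(operation))
                winner.accept(operation)
                return winner

        machines = [OperationalHolon(f"machine-{i}") for i in range(3)]
        supervisor = SupervisorHolon(machines)
        for op in ["op-1", "op-2", "op-3", "op-4"]:
            print(op, "->", supervisor.allocate(op).name)

    If the supervisor disappears, for instance after a disturbance, the holons can keep negotiating among themselves using the same bid protocol; this loosely mirrors the self-organizing fallback behind ADACOR's adaptive production control.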