
    Web service composition: A survey of techniques and tools

    Web services are a consolidated reality of the modern Web, with a tremendous and increasing impact on everyday computing tasks. They have turned the Web into the largest, most accepted, and most vivid distributed computing platform ever. Yet the use and integration of Web services into composite services or applications, a delicate and conceptually non-trivial task, has not yet unleashed its full potential. A consolidated analysis framework that advances the fundamental understanding of Web service composition building blocks, in terms of concepts, models, languages, productivity support techniques, and tools, is required. Such a framework is necessary to enable the effective exploration, understanding, assessment, comparison, and selection of service composition models, languages, techniques, platforms, and tools. This article establishes such a framework and reviews the state of the art in service composition from an unprecedented, holistic perspective.
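
    To make the idea of service composition concrete, here is a minimal sketch of sequential orchestration in Python; the two component "services" are hypothetical stand-ins for remote endpoints, not anything from the surveyed tools.

```python
# Minimal sketch of sequential service orchestration: a composite service
# wires the output of one Web service into the input of the next.
# Both component "services" below are hypothetical stand-ins for remote calls.

def geocode_service(address: str) -> dict:
    """Stand-in for a remote geocoding service (hypothetical)."""
    return {"lat": 41.65, "lon": -0.88}  # fixed response for illustration

def weather_service(lat: float, lon: float) -> dict:
    """Stand-in for a remote weather service (hypothetical)."""
    return {"temp_c": 21.0, "conditions": "clear"}

def weather_for_address(address: str) -> dict:
    """Composite service: geocode the address, then fetch the weather.
    The data flow between the two component services is the composition logic."""
    location = geocode_service(address)
    return weather_service(location["lat"], location["lon"])

print(weather_for_address("Calle Mayor 1, Zaragoza"))
```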

    TABSAOND: A technique for developing agent-based simulation apps and online tools with nondeterministic decisions

    Agent-based simulators (ABSs) have successfully allowed practitioners to estimate the outcomes of certain input circumstances in several domains. Although some techniques and processes provide hints about the construction of these systems, some aspects have not yet been discussed in the literature. In this context, the current approach presents a technique for developing ABSs. Its focus is to guide practitioners in designing and implementing the decision-making processes of agents in nondeterministic scenarios. As an additional technological innovation, the ABSs are deployed as both mobile apps and online tools. This work illustrates the approach with two case studies in the fields of (a) health and welfare and (b) tourism. The case studies were also developed with the closest existing technique from the literature so that the two techniques could be compared. The presented technique produced simulated outcomes that more closely matched the real ones, and the resulting ABSs were more efficient and reliable for large numbers of agents (e.g., 10,000 to 400,000). The development time was also lower. Both the framework and the implementation of one case study are freely distributed as open source to facilitate the reproducibility of the experiments and to assist practitioners in applying the approach.
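
    As an illustration of the kind of nondeterministic agent decision-making the technique targets, the following Python sketch samples an agent's action from state-derived weights; the actions, state fields, and weights are hypothetical and not taken from TABSAOND itself.

```python
import random

# Minimal sketch of a nondeterministic agent decision step: the agent samples
# an action from a probability distribution derived from its state, so
# repeated simulation runs can yield different outcomes.

def decide(agent_state: dict) -> str:
    """Sample one action using unnormalised weights derived from the state."""
    actions = ["stay", "move", "interact"]
    weights = [
        1.0,                           # baseline propensity to stay put
        agent_state["energy"],         # more energy -> more likely to move
        agent_state["sociability"],    # more sociable -> more interaction
    ]
    return random.choices(actions, weights=weights, k=1)[0]

agent = {"energy": 2.0, "sociability": 0.5}
print([decide(agent) for _ in range(5)])  # e.g. ['move', 'stay', 'move', ...]
```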

    Enabling 5G Technologies

    The increasing demand for connectivity and broadband wireless access is leading to the fifth generation (5G) of cellular networks. The overall scope of 5G is greater in client breadth and diversity than in previous generations, requiring substantial changes to network topologies and air interfaces. This divergence from existing network designs is prompting massive growth in research, with the U.S. government alone investing $400 million in advanced wireless technologies. 5G is projected to enable the connectivity of 20 billion devices by 2020 and to dominate areas such as vehicular networking and the Internet of Things. However, many challenges remain before large-scale deployment and general adoption across the cellular industry are possible. In this dissertation, we propose three new additions to the literature to further the progression of 5G development. These additions approach 5G from both top-down and bottom-up perspectives, considering interference modeling and physical-layer prototyping. Heterogeneous deployments are considered from a purely analytical perspective, modeling co-channel interference between and among both macrocell and femtocell tiers. We further enhance these models with parameterized directional antennas and integrate them into a novel mixed point-process study of the network. At the air interface, we examine Software-Defined Radio (SDR) development of physical link-level simulations. First, we introduce a new algorithm acceleration framework for MATLAB, enabling real-time and concurrent applications. Extensible beyond SDR alone, this dataflow framework can provide application speedup for stream-based or data-dependent processing. Furthermore, using SDRs we develop a localization testbed for dense deployments of 5G small cells, providing real-time tracking of targets using foundational direction-of-arrival estimation techniques, including a new OFDM-based correlation implementation.
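
    As a hedged illustration of the stochastic-geometry style of interference modeling mentioned above, the following Monte Carlo sketch sums path-loss-attenuated power from a Poisson point process of interferers; the density, transmit power, and path-loss exponent are illustrative assumptions, and NumPy is assumed to be available.

```python
import numpy as np

# Monte Carlo sketch of aggregate co-channel interference from a Poisson
# point process (PPP) of interferers (e.g. femtocells), a common abstraction
# in stochastic-geometry models of heterogeneous networks.

rng = np.random.default_rng(0)

def aggregate_interference(density=1e-4, radius=1000.0,
                           tx_power=1.0, alpha=3.5):
    """One realisation: drop a Poisson number of interferers uniformly in a
    disc around the receiver and sum their powers under path loss r^-alpha."""
    n = rng.poisson(density * np.pi * radius**2)   # PPP count in the disc
    r = radius * np.sqrt(rng.random(n))            # uniform radii in a disc
    r = np.maximum(r, 1.0)                         # avoid the r -> 0 singularity
    return float(np.sum(tx_power * r ** (-alpha)))

samples = [aggregate_interference() for _ in range(10_000)]
print("mean aggregate interference:", np.mean(samples))
```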

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Parallel programming paradigms and frameworks in big data era

    With Cloud Computing emerging as a promising new approach for ad-hoc parallel data processing, major companies have started to integrate frameworks for parallel data processing into their product portfolios, making it easy for customers to access these services and deploy their programs. We have entered the era of Big Data. The explosion and profusion of available data in a wide range of application domains raise new challenges and opportunities in a plethora of disciplines, ranging from science and engineering to biology and business. One major challenge is how to take advantage of the unprecedented scale of data, typically of heterogeneous nature, in order to acquire further insights and knowledge for improving the quality of the offered services. To exploit this new resource, we need to scale up and scale out both our infrastructures and our standard techniques. Our society is already data-rich, but the question remains whether we have the conceptual tools to handle it. In this paper we discuss and analyze opportunities and challenges for efficient parallel data processing. Big Data is the next frontier for innovation, competition, and productivity, and many solutions continue to appear, partly supported by the considerable enthusiasm around the MapReduce paradigm for large-scale data analysis. We review various parallel and distributed programming paradigms, analyzing how they fit into the Big Data era, and present modern emerging paradigms and frameworks. To better support practitioners interested in this domain, we end with an analysis of ongoing research challenges towards a truly fourth-generation data-intensive science.
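
    To ground the MapReduce paradigm the paper reviews, here is a minimal single-process word-count sketch in Python; real frameworks such as Hadoop or Spark distribute the same two phases across a cluster.

```python
from collections import Counter
from functools import reduce

# Single-process sketch of MapReduce: independent map tasks produce partial
# results, which a reduce phase merges into a global answer.

documents = ["big data needs parallel processing",
             "parallel frameworks process big data"]

# Map phase: each document is mapped independently to partial word counts.
partials = [Counter(doc.split()) for doc in documents]

# Reduce phase: partial counts are merged pairwise into the global totals.
totals = reduce(lambda a, b: a + b, partials, Counter())

print(totals.most_common(3))  # e.g. [('big', 2), ('data', 2), ('parallel', 2)]
```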