
    Next challenges for adaptive learning systems

    Learning from evolving streaming data has become a 'hot' research topic in the last decade, and many adaptive learning algorithms have been developed. This research was stimulated by rapidly growing amounts of industrial, transactional, sensor and other business data that arrives in real time and needs to be mined in real time. Under such circumstances, constant manual adjustment of models is inefficient and, with increasing amounts of data, is becoming infeasible. Nevertheless, adaptive learning models are still rarely employed in business applications in practice. In the light of rapidly growing, structurally rich 'big data', a new generation of parallel computing solutions and cloud computing services, as well as recent advances in portable computing devices, this article aims to identify the current key research directions to be taken to bring adaptive learning closer to application needs. We identify six forthcoming challenges in designing and building adaptive learning (prediction) systems: making adaptive systems scalable, dealing with realistic data, improving usability and trust, integrating expert knowledge, taking into account various application needs, and moving from adaptive algorithms towards adaptive tools. These challenges are critical for evolving stream settings, as the process of model building needs to be fully automated and continuous.
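    A minimal sketch of the "fully automated and continuous" model building the abstract argues for, assuming a hand-rolled online linear learner with an error-based reset as a crude stand-in for concept-drift handling; the model, thresholds and simulated stream are illustrative assumptions, not the paper's method.

        # Sketch: a learner that adapts continuously to a stream and reacts to
        # drift without manual intervention. All thresholds are assumptions.
        import random

        class OnlineLinearModel:
            def __init__(self, n_features, lr=0.05):
                self.w = [0.0] * n_features
                self.lr = lr

            def predict(self, x):
                return sum(wi * xi for wi, xi in zip(self.w, x))

            def update(self, x, y):
                err = self.predict(x) - y            # squared-loss gradient step
                for i, xi in enumerate(x):
                    self.w[i] -= self.lr * err * xi
                return err * err

        model = OnlineLinearModel(n_features=2)
        avg_err, drift_threshold = 0.0, 4.0
        for t in range(10_000):                      # simulated evolving stream
            x = [random.random(), random.random()]
            true_w = [1.0, 2.0] if t < 5_000 else [-3.0, 0.5]  # abrupt drift
            y = sum(wi * xi for wi, xi in zip(true_w, x))
            avg_err = 0.99 * avg_err + 0.01 * model.update(x, y)
            if avg_err > drift_threshold:            # automated reaction: reset
                model = OnlineLinearModel(n_features=2)
                avg_err = 0.0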

    Using a desktop grid to support simulation modelling

    Simulation is characterized by the need to run multiple sets of computationally intensive experiments. We argue that Grid computing can reduce the overall execution time of such experiments by tapping into the typically underutilized network of departmental desktop PCs, collectively known as desktop grids. Commercial-off-the-shelf simulation packages (CSPs) are used in industry to simulate models. To investigate whether Grid computing can benefit simulation, this paper introduces our desktop grid, WinGrid, and discusses how it can be used to support the processing needs of CSPs. Results indicate a linear speed-up, suggesting that Grid computing does indeed hold promise for simulation.
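    The pattern the paper exploits, farming mutually independent simulation replications out to idle machines, can be sketched as follows; here concurrent.futures on a single host stands in for WinGrid's distribution across desktop PCs, and run_replication() is a hypothetical placeholder for a CSP model run, not WinGrid's actual API.

        # Sketch: independent simulation replications farmed out in parallel,
        # as a desktop grid would. One process per core stands in for one
        # desktop-grid worker; run_replication() is a toy model, not a CSP.
        import random
        import statistics
        from concurrent.futures import ProcessPoolExecutor

        def run_replication(seed: int) -> float:
            """One computationally intensive, independent experiment."""
            rng = random.Random(seed)
            return statistics.fmean(rng.expovariate(1.2) for _ in range(100_000))

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:      # one worker per CPU core
                results = list(pool.map(run_replication, range(32)))
            print(f"mean across 32 replications: {statistics.fmean(results):.4f}")

    Because the replications share no state, the achievable speed-up is close to linear in the number of workers, which is consistent with the result the abstract reports.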

    Performance Analysis of Publish/Subscribe Systems

    The Desktop Grid offers solutions to several challenges and answers the growing needs of scientific computing. The technology consists mainly in exploiting geographically dispersed resources to run complex applications that require substantial computing power and/or storage capacity. However, as the number of resources increases, scalability, self-organisation, dynamic reconfiguration, decentralisation and performance become more and more essential. Since such properties are exhibited by P2P systems, the convergence of grid computing and P2P computing seems natural. In this context, this paper evaluates the scalability and performance of P2P tools for discovering and registering services. Three protocols are used for this purpose: Bonjour, Avahi and Free-Pastry. We have studied the behaviour of these protocols with respect to two criteria: the elapsed time for registering services and the time needed to discover new services. Our aim is to analyse these results in order to choose the best protocol for building a decentralised middleware for desktop grids.
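    The paper's two criteria, time to register a service and time to discover one, could be measured with a harness along the following lines; this sketch assumes the python-zeroconf package as an mDNS stand-in for Bonjour/Avahi (Free-Pastry, being a DHT, would need a different setup), and all service names and ports are invented.

        # Sketch: timing service registration and discovery over mDNS with
        # python-zeroconf (pip install zeroconf). Names/ports are illustrative.
        import socket
        import time
        from zeroconf import ServiceBrowser, ServiceInfo, Zeroconf

        SERVICE_TYPE = "_desktopgrid._tcp.local."

        class TimingListener:                        # duck-typed browser listener
            def __init__(self):
                self.discovered_at = None
            def add_service(self, zc, type_, name):
                self.discovered_at = time.perf_counter()
            def update_service(self, zc, type_, name):
                pass
            def remove_service(self, zc, type_, name):
                pass

        zc = Zeroconf()
        listener = TimingListener()
        ServiceBrowser(zc, SERVICE_TYPE, listener)

        info = ServiceInfo(SERVICE_TYPE, f"node-1.{SERVICE_TYPE}",
                           addresses=[socket.inet_aton("127.0.0.1")], port=9000)
        t0 = time.perf_counter()
        zc.register_service(info)                    # criterion 1: registration
        t_register = time.perf_counter() - t0
        while listener.discovered_at is None:        # criterion 2: discovery
            time.sleep(0.01)
        print(f"register {t_register:.4f}s, "
              f"discover {listener.discovered_at - t0:.4f}s")
        zc.close()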

    Distributed Exact Shortest Paths in Sublinear Time

    The distributed single-source shortest paths problem is one of the most fundamental and central problems in message-passing distributed computing. The classical Bellman-Ford algorithm solves it in $O(n)$ time, where $n$ is the number of vertices in the input graph $G$. Peleg and Rubinovich (FOCS'99) showed a lower bound of $\tilde{\Omega}(D + \sqrt{n})$ for this problem, where $D$ is the hop-diameter of $G$. Whether or not this problem can be solved in $o(n)$ time when $D$ is relatively small is a major notorious open question. Despite intensive research [LP13, N14, HKN15, EN16, BKKL16] that yielded near-optimal algorithms for the approximate variant of this problem, no progress was reported for the original problem. In this paper we answer this question in the affirmative. We devise an algorithm that requires $O((n \log n)^{5/6})$ time, for $D = O(\sqrt{n \log n})$, and $O(D^{1/3} \cdot (n \log n)^{2/3})$ time, for larger $D$. This running time is sublinear in $n$ in almost the entire range of parameters, specifically, for $D = o(n/\log^2 n)$. For the all-pairs shortest paths problem, our algorithm requires $O(n^{5/3} \log^{2/3} n)$ time, regardless of the value of $D$. We also devise the first algorithm with non-trivial complexity guarantees for computing exact shortest paths in the multipass semi-streaming model of computation. From the technical viewpoint, our algorithm computes a hopset $G''$ of a skeleton graph $G'$ of $G$ without first computing $G'$ itself. We then conduct a Bellman-Ford exploration in $G' \cup G''$, while computing the required edges of $G'$ on the fly. As a result, our algorithm computes exactly those edges of $G'$ that it really needs, rather than computing approximately the entire $G'$.
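    For context on the $O(n)$ baseline the paper improves on: classical Bellman-Ford relaxes every edge once per round and is exact after at most $n-1$ rounds. The sequential sketch below illustrates that baseline only, not the paper's sublinear hopset-based algorithm.

        # Classical Bellman-Ford: the O(n)-round baseline. In the message-
        # passing model each outer iteration corresponds to one synchronous
        # round of exchanges along every edge; here rounds run sequentially.
        import math

        def bellman_ford(n, edges, source):
            """edges: (u, v, weight) triples of an undirected weighted graph."""
            dist = [math.inf] * n
            dist[source] = 0.0
            for _ in range(n - 1):                   # n-1 rounds always suffice
                changed = False
                for u, v, w in edges:
                    if dist[u] + w < dist[v]:
                        dist[v], changed = dist[u] + w, True
                    if dist[v] + w < dist[u]:        # undirected: both directions
                        dist[u], changed = dist[v] + w, True
                if not changed:                      # stop once distances settle
                    break
            return dist

        edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 5.0)]
        print(bellman_ford(4, edges, source=0))      # [0.0, 3.0, 1.0, 8.0]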

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, hence load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in terms of response time and cost savings under dynamic workload scenarios.
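    A toy version of the load-coordination decision described above, choosing which federated data center hosts the next service instance as load shifts, might look like this; the DataCenter fields and cost weights are invented for illustration and are unrelated to the (Java-based) CloudSim toolkit's actual API.

        # Toy sketch of InterCloud-style load coordination: place each new
        # service instance in the data center minimizing a load/latency cost.
        # Fields and weights are illustrative assumptions only.
        from dataclasses import dataclass

        @dataclass
        class DataCenter:
            name: str
            capacity: int        # max concurrent service instances
            load: int            # instances currently running
            latency_ms: float    # network latency to the requesting region

        def placement_cost(dc, w_load=1.0, w_latency=0.01):
            return w_load * dc.load / dc.capacity + w_latency * dc.latency_ms

        def place_service(datacenters):
            """Pick the cheapest data center that still has spare capacity."""
            candidates = [dc for dc in datacenters if dc.load < dc.capacity]
            if not candidates:
                raise RuntimeError("federation exhausted: expand capacity")
            best = min(candidates, key=placement_cost)
            best.load += 1                           # provision one instance
            return best.name

        federation = [
            DataCenter("us-east", capacity=100, load=90, latency_ms=20.0),
            DataCenter("eu-west", capacity=80, load=30, latency_ms=90.0),
        ]
        for _ in range(5):                           # a burst of requests
            print(place_service(federation))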