
    The Informatics of the Equity Markets - A Collaborative Approach

    This paper aims to provide a high-level overview of the information technology that supports the electronic transactions performed on the equity markets. It is meant to offer a succinct introduction to the various technologies tailored to handle data transfer between the participants on an equity market, the architectural approaches to trading system design, and the communication in a collaborative distributed computing environment. Our intention here is not to provide solutions or to propose definitive designs, merely to scratch the surface of this vast domain and open the path for subsequent research.
    Keywords: securities exchange, stock order flow, trading system architecture, distributed computing, middleware, collaborative system, order-matching algorithm
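
    To make the order-matching idea concrete, here is a minimal sketch of a price-time priority matching loop of the kind such trading systems use; the data layout and names are illustrative assumptions, not taken from the paper.

        # Minimal price-time priority matching sketch (illustrative only).
        import heapq
        from itertools import count

        _seq = count()  # arrival counter: preserves time priority among equal prices

        class OrderBook:
            def __init__(self):
                self.bids = []  # max-heap on price (price stored negated)
                self.asks = []  # min-heap on price

            def submit(self, side, price, qty):
                book, opposite, sign = (
                    (self.bids, self.asks, -1) if side == "buy" else (self.asks, self.bids, 1)
                )
                # Match against the best resting order on the other side while prices cross.
                while opposite and qty > 0:
                    key, seq, best_price, best_qty = opposite[0]
                    crosses = price >= best_price if side == "buy" else price <= best_price
                    if not crosses:
                        break
                    traded = min(qty, best_qty)
                    qty -= traded
                    print(f"trade {traded} @ {best_price}")
                    if traded == best_qty:
                        heapq.heappop(opposite)
                    else:
                        opposite[0] = (key, seq, best_price, best_qty - traded)
                if qty > 0:  # rest the remainder in the book
                    heapq.heappush(book, (sign * price, next(_seq), price, qty))

        book = OrderBook()
        book.submit("sell", 10.0, 100)
        book.submit("buy", 10.5, 60)   # prints: trade 60 @ 10.0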

    Dynamic Control Flow in Large-Scale Machine Learning

    Many recent machine learning models rely on fine-grained dynamic control flow for training and inference. In particular, models based on recurrent neural networks and on reinforcement learning depend on recurrence relations, data-dependent conditional execution, and other features that call for dynamic control flow. These applications benefit from the ability to make rapid control-flow decisions across a set of computing devices in a distributed system. For performance, scalability, and expressiveness, a machine learning system must support dynamic control flow in distributed and heterogeneous environments. This paper presents a programming model for distributed machine learning that supports dynamic control flow. We describe the design of the programming model, and its implementation in TensorFlow, a distributed machine learning system. Our approach extends the use of dataflow graphs to represent machine learning models, offering several distinctive features. First, the branches of conditionals and bodies of loops can be partitioned across many machines to run on a set of heterogeneous devices, including CPUs, GPUs, and custom ASICs. Second, programs written in our model support automatic differentiation and distributed gradient computations, which are necessary for training machine learning models that use control flow. Third, our choice of non-strict semantics enables multiple loop iterations to execute in parallel across machines, and to overlap compute and I/O operations. We have done our work in the context of TensorFlow, and it has been used extensively in research and production. We evaluate it using several real-world applications, and demonstrate its performance and scalability.
    Comment: Appeared in EuroSys 2018. 14 pages, 16 figures
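
    As a small, hedged illustration of the data-dependent loops and conditionals the abstract refers to, the sketch below uses TensorFlow's public tf.while_loop and tf.cond operators together with automatic differentiation; it is not code from the paper, only an example of the programming style it describes.

        # Data-dependent loop and conditional, differentiated with a gradient tape.
        import tensorflow as tf

        x = tf.constant(2.0)

        def cond(i, acc):
            return acc < 100.0          # loop condition depends on runtime data

        def body(i, acc):
            return i + 1, acc * x       # recurrence: acc is multiplied by x each step

        with tf.GradientTape() as tape:
            tape.watch(x)
            i, acc = tf.while_loop(cond, body, loop_vars=(tf.constant(0), tf.constant(1.0)))
            y = tf.cond(acc > 500.0, lambda: acc / 2.0, lambda: acc)   # runtime-value branch

        # Gradients flow through both the loop and the conditional.
        print(i.numpy(), y.numpy(), tape.gradient(y, x).numpy())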

    Distributed Service Broker Policy Algorithm for Logistics over Cloud

    A logistics information system focuses on the flow of information about the storage and supply of goods from the point of origin to the organization's point of consumption. The cloud makes this flow more efficient, but cloud computing manages the logistics information system centrally: the centralized data center keeps track of information distribution, which creates network congestion and overload when requests from users in different regions arrive at the same time, so the data center must be managed effectively to maintain performance. This paper presents a distributed service broker policy to implement a centralized data center and proposes distributed data centers for a logistics information system over the cloud. The paper also presents results showing that the distributed service broker policy algorithm reduces the network congestion, high latency, and cost caused by a large number of requests for a particular service in the distributed data centers for logistics.
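
    A minimal, hypothetical sketch of the kind of broker decision such a policy makes: route each user request to the regional data center with the lowest combined latency-and-load score. The class, fields, and weights below are illustrative assumptions, not details from the paper.

        # Pick the data center with the lowest weighted latency + utilisation score.
        from dataclasses import dataclass

        @dataclass
        class DataCenter:
            name: str
            latency_ms: dict        # estimated latency from each user region
            active_requests: int
            capacity: int

        def pick_data_center(user_region, centers, w_latency=1.0, w_load=50.0):
            def score(dc):
                utilisation = dc.active_requests / dc.capacity     # 0..1
                return w_latency * dc.latency_ms[user_region] + w_load * utilisation
            return min(centers, key=score)

        centers = [
            DataCenter("dc-eu", {"eu": 20, "asia": 180}, active_requests=800, capacity=1000),
            DataCenter("dc-asia", {"eu": 170, "asia": 25}, active_requests=100, capacity=1000),
        ]
        print(pick_data_center("eu", centers).name)   # dc-eu, unless it becomes heavily loaded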

    Four-dimensional dynamic flow measurement by holographic particle image velocimetry

    The ultimate goal of holographic particle image velocimetry (HPIV) is to provide space- and time-resolved measurement of complex flows. Recent new understanding of holographic imaging of small particles, pertaining to intrinsic aberration and noise in particular, has enabled us to elucidate fundamental issues in HPIV and implement a new HPIV system. This system is based on our previously reported off-axis HPIV setup, but the design is optimized by incorporating our new insights into holographic particle imaging characteristics. Furthermore, the new system benefits from advanced data processing algorithms and distributed parallel computing technology. Because of its robustness and efficiency, for the first time to our knowledge, the goal of both temporally and spatially resolved flow measurements becomes tangible. We demonstrate its temporal measurement capability by a series of phase-locked dynamic measurements of instantaneous three-dimensional, three-component velocity fields in a highly three-dimensional vortical flow: the flow past a tab.

    A review on orchestration distributed systems for IoT smart services in fog computing

    This paper provides a review of orchestration of distributed systems for IoT smart services in fog computing. The cloud infrastructure alone cannot handle the flow of information given the abundance of data, devices, and interactions, so fog computing has emerged as a new paradigm to overcome this problem. One of the first challenges is to build orchestration systems that activate the clouds and execute tasks throughout the whole system while accounting for large geographical distances, heterogeneity, and the low latency needed to offset the limitations of cloud computing. The open problems for distributed orchestration in fog computing are meeting the high-reliability and low-delay requirements of IoT application systems and forming a larger computer network, such as a fog network, across different geographic sites. This paper reviewed approximately 68 articles on orchestration of distributed systems for fog computing. The result presents the orchestration of distributed systems and the evaluation criteria for fog computing, compared in terms of Borg, Kubernetes, Swarm, Mesos, Aurora, heterogeneity, QoS management, scalability, mobility, federation, and interoperability. The significance of this study is to support researchers in developing orchestration of distributed systems for IoT smart services in fog computing, with a focus on the IR4.0 national agenda.

    ATAMM analysis tool

    Diagnostics software for analyzing Algorithm to Architecture Mapping Model (ATAMM)-based concurrent processing systems is presented. ATAMM is capable of modeling the execution of large-grain algorithms on distributed data flow architectures. The tool graphically displays algorithm activities and processor activities for evaluation of the behavior and performance of an ATAMM-based system. The tool's measurement capabilities indicate computing speed, throughput, concurrency, resource utilization, and overhead. Evaluations are performed on a simulated system using the software tool. The tool is used to estimate theoretical lower-bound performance, and analysis results are shown to be comparable to the predictions.

    Metascheduling of HPC Jobs in Day-Ahead Electricity Markets

    High performance grid computing is a key enabler of large-scale collaborative computational science. With the promise of exascale computing, high performance grid systems are expected to incur electricity bills that grow super-linearly over time. In order to achieve cost effectiveness in these systems, it is essential for the scheduling algorithms to exploit the electricity price variations, both in space and time, that are prevalent in dynamic electricity markets. In this paper, we present a metascheduling algorithm to optimize the placement of jobs in a compute grid that consumes electricity from the day-ahead wholesale market. We formulate the scheduling problem as a Minimum Cost Maximum Flow problem and leverage queue waiting time and electricity price predictions to accurately estimate the cost of job execution at a system. Using trace-based simulation with real and synthetic workload traces, and real electricity price data sets, we demonstrate our approach on two currently operational grids, XSEDE and NorduGrid. Our experimental setup collectively comprises more than 433K processors spread across 58 compute systems in 17 geographically distributed locations. Experiments show that our approach simultaneously optimizes the total electricity cost and the average response time of the grid, without being unfair to users of the local batch systems.
    Comment: Appears in IEEE Transactions on Parallel and Distributed Systems
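
    To illustrate the Minimum Cost Maximum Flow formulation in miniature, the sketch below places jobs on compute systems through a flow network whose edge costs stand in for predicted electricity prices; the graph layout, costs, and use of networkx are illustrative assumptions, not the paper's actual model.

        # Toy min-cost max-flow placement: jobs -> systems, edge cost = predicted unit price.
        import networkx as nx

        jobs = ["job1", "job2", "job3"]
        systems = {"siteA": {"free_nodes": 2, "price": 30},
                   "siteB": {"free_nodes": 1, "price": 18}}

        G = nx.DiGraph()
        for j in jobs:
            G.add_edge("source", j, capacity=1, weight=0)
            for s, info in systems.items():
                # Edge cost: predicted electricity price (a queue-wait penalty could be added).
                G.add_edge(j, s, capacity=1, weight=info["price"])
        for s, info in systems.items():
            G.add_edge(s, "sink", capacity=info["free_nodes"], weight=0)

        flow = nx.max_flow_min_cost(G, "source", "sink")
        placement = {j: s for j in jobs for s in systems if flow[j][s] > 0}
        print(placement)   # e.g. {'job1': 'siteB', 'job2': 'siteA', 'job3': 'siteA'}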

    Using Dedicated and Opportunistic Networks in Synergy for a Cost-effective Distributed Stream Processing Platform

    This paper presents a case for exploiting the synergy of dedicated and opportunistic network resources in a distributed hosting platform for data stream processing applications. Our previous studies have demonstrated the benefits of combining dedicated reliable resources with opportunistic resources for high-throughput computing applications, where timely allocation of the processing units is the primary concern. Since distributed stream processing applications demand large volumes of data transmission between the processing sites at a consistent rate, adequate control over the network resources is important here to assure a steady flow of processing. In this paper, we propose a system model for the hybrid hosting platform in which stream processing servers installed at distributed sites are interconnected with a combination of dedicated links and the public Internet. Decentralized algorithms have been developed for allocating the two classes of network resources among the competing tasks, with the objective of higher task throughput and better utilization of the expensive dedicated resources. Results from an extensive simulation study show that, with proper management, systems exploiting the synergy of dedicated and opportunistic resources yield considerably higher task throughput and thus a higher return on investment than systems solely using expensive dedicated resources.
    Comment: 9 pages
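
    As a rough, illustrative sketch of one way such an allocation could work (not the paper's decentralized algorithm): place the highest-rate streams on the dedicated links first and let the remainder overflow onto the public Internet.

        # Greedy split of stream-processing tasks across dedicated and opportunistic links.
        def allocate(tasks, dedicated_capacity_mbps):
            """tasks: list of (name, required_mbps); returns a per-task link assignment."""
            assignment = {}
            remaining = dedicated_capacity_mbps
            # Give the dedicated links to the heaviest streams, where a steady rate matters most.
            for name, rate in sorted(tasks, key=lambda t: t[1], reverse=True):
                if rate <= remaining:
                    assignment[name] = "dedicated"
                    remaining -= rate
                else:
                    assignment[name] = "public-internet"
            return assignment

        print(allocate([("video", 400), ("logs", 50), ("sensors", 120)], dedicated_capacity_mbps=500))
        # {'video': 'dedicated', 'sensors': 'public-internet', 'logs': 'dedicated'}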