
    TensorLayer: A Versatile Library for Efficient Deep Learning Development

    Deep learning has enabled major advances in computer vision, natural language processing, and multimedia, among many other fields. Developing a deep learning system is arduous and complex, as it involves constructing neural network architectures, managing training/trained models, tuning the optimization process, and preprocessing and organizing data. TensorLayer is a versatile Python library that aims to help researchers and engineers efficiently develop deep learning systems. It offers rich abstractions for neural networks, model and data management, and a parallel workflow mechanism. While boosting efficiency, TensorLayer maintains both performance and scalability. TensorLayer was released in September 2016 on GitHub, and has helped people from academia and industry develop real-world applications of deep learning. Comment: ACM Multimedia 201
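
    The abstract only names the layer abstractions at a high level. As a hedged illustration, the sketch below assumes the TensorLayer 1.x API built on TensorFlow 1.x placeholders (as in its early MNIST tutorial); exact signatures may differ across versions. It shows how a small classifier is composed by stacking layer wrappers:

        import tensorflow as tf
        import tensorlayer as tl

        # Placeholders for flattened 28x28 images and integer class labels.
        x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
        y_ = tf.placeholder(tf.int64, shape=[None], name='y_')

        # Each TensorLayer layer wraps the previous one, forming the network.
        network = tl.layers.InputLayer(x, name='input')
        network = tl.layers.DenseLayer(network, n_units=800, act=tf.nn.relu, name='relu1')
        network = tl.layers.DenseLayer(network, n_units=10, act=tf.identity, name='output')

        # The wrapped TensorFlow tensor is exposed for loss construction and training.
        y = network.outputs
        cost = tl.cost.cross_entropy(y, y_, name='xentropy')

    The layer objects also keep references to their parameters, which is the hook the library's model-management utilities build on.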

    Specification of multiparty audio and video interaction based on the Reference Model of Open Distributed Processing

    The Reference Model of Open Distributed Processing (RM-ODP) is an emerging ISO/ITU-T standard. It provides a framework of abstractions based on viewpoints, and it defines five viewpoint languages to model open distributed systems. This paper uses the viewpoint languages to specify multiparty audio/video exchange in distributed systems. To the designers of distributed systems, it shows how the concepts and rules of RM-ODP can be applied. The ODP "binding object" is an important concept for modelling continuous data flows in distributed systems. We take this concept as a basis for multiparty audio and video flow exchanges, and we provide five ODP viewpoint specifications, each emphasising a particular concern. To ensure overall correctness, special attention is paid to the mapping between the ODP viewpoint specifications.
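
    Since the binding object is the central abstraction here, a toy sketch may help fix the idea. This is ordinary Python used purely for illustration, not an ODP viewpoint specification, and the class and party names are invented: a multiparty binding connects several stream interfaces and forwards each continuous flow to every other party.

        class StreamInterface:
            """One party's audio or video stream endpoint."""
            def __init__(self, party, media):
                self.party, self.media = party, media
            def receive(self, frame):
                print(f"{self.party} <- {self.media}: {frame}")

        class BindingObject:
            """Multiparty binding: a produced flow is delivered to every other sink."""
            def __init__(self):
                self.sinks = []
            def attach(self, interface):
                self.sinks.append(interface)
            def produce(self, source_party, media, frame):
                for sink in self.sinks:
                    if sink.party != source_party and sink.media == media:
                        sink.receive(frame)

        binding = BindingObject()
        for who in ("alice", "bob", "carol"):
            binding.attach(StreamInterface(who, "video"))
        binding.produce("alice", "video", "frame-0")   # delivered to bob and carol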

    SPIDER: Fault Resilient SDN Pipeline with Recovery Delay Guarantees

    When dealing with node or link failures in Software Defined Networking (SDN), the network's capability to establish an alternative path depends on controller reachability and on the round-trip times (RTTs) between the controller and the involved switches. Moreover, current SDN data plane abstractions for failure detection (e.g. OpenFlow "Fast-failover") do not allow programmers to tweak the switches' detection mechanism, leaving SDN operators still relying on proprietary management interfaces (when available) to achieve guaranteed detection and recovery delays. We propose SPIDER, an OpenFlow-like pipeline design that provides i) a detection mechanism based on switches' periodic link probing and ii) fast reroute of traffic flows even in the case of distant failures, regardless of controller availability. SPIDER can be implemented using stateful data plane abstractions such as OpenState or Open vSwitch, and it offers guaranteed short (i.e. ms) failure detection and recovery delays, with a configurable trade-off between overhead and failover responsiveness. We present the SPIDER pipeline design, its behavioral model, and an analysis of its impact on flow tables' memory. We also implemented and experimentally validated SPIDER using OpenState (an OpenFlow 1.3 extension for stateful packet processing), showing numerical results on its performance in terms of recovery latency and packet losses. Comment: 8 page
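
    To make the two mechanisms concrete, here is a hedged sketch in plain Python rather than an OpenFlow pipeline; the constants and class are invented for illustration and are not taken from the paper. Each port tracks the last heartbeat probe it saw, declares a failure when probes stop arriving within the detection budget, and locally switches to a pre-installed backup next hop without contacting the controller.

        import time

        DETECT_TIMEOUT_S = 0.005   # assumed ms-scale detection budget

        class PortState:
            def __init__(self, primary_port, backup_port):
                self.primary, self.backup = primary_port, backup_port
                self.last_probe_seen = time.monotonic()
                self.failed = False

            def on_probe(self):
                """Heartbeat probe received on the primary link: neighbour is alive."""
                self.last_probe_seen = time.monotonic()
                self.failed = False

            def next_hop(self):
                """Choose the output port; fail over locally if probes stopped."""
                if time.monotonic() - self.last_probe_seen > DETECT_TIMEOUT_S:
                    self.failed = True   # detected without any controller round trip
                return self.backup if self.failed else self.primary

        port = PortState(primary_port=1, backup_port=2)
        out = port.next_hop()   # primary while probes keep arriving, backup after a failure

    In SPIDER this per-port state lives in stateful flow tables (OpenState or Open vSwitch), which is what allows both detection and reroute to happen at data plane timescales.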

    A Modeling Approach based on UML/MARTE for GPU Architecture

    Nowadays, high-performance computing is part of the embedded-systems context. Graphics Processing Units (GPUs) are increasingly used to accelerate a large share of algorithms and applications. Over the past years, few efforts have been made to describe abstractions of applications in relation to their target architectures. Thus, when developers need to associate applications with GPUs, for example, they find it difficult and prefer to use the APIs of these architectures directly. This paper presents a metamodel extension for the MARTE profile and a model for GPU architectures. The main goal is to specify task and data allocation in the memory hierarchy of these architectures. The results show that this approach will help to generate code for GPUs based on model transformations using Model Driven Engineering (MDE). Comment: Symposium en Architectures nouvelles de machines (SympA'14) (2011
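
    The kind of information the proposed metamodel captures, namely task placement and data allocation in the GPU memory hierarchy, can be pictured with a small hypothetical example. This is ordinary Python rather than UML/MARTE stereotypes, and every name below is invented for illustration.

        from dataclasses import dataclass, field

        @dataclass
        class DataAllocation:
            array: str
            memory: str            # e.g. "global", "shared", "constant"

        @dataclass
        class TaskAllocation:
            task: str
            processor: str         # e.g. "gpu0", "host_cpu"
            data: list = field(default_factory=list)

        model = [
            TaskAllocation("matmul_kernel", "gpu0",
                           [DataAllocation("A", "global"),
                            DataAllocation("tileA", "shared")]),
            TaskAllocation("reduce_kernel", "gpu0",
                           [DataAllocation("partials", "shared")]),
        ]
        # In an MDE flow, a model-to-text transformation would walk a model like
        # this and emit the corresponding GPU allocation and kernel-launch code.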

    Virtual Reality as a Philosophical Problem

    Data analysis and predictive analytics today are driven by large-scale distributed deployments of complex pipelines that guide data cleaning, model training, and evaluation. A wide range of systems and tools provide the basic abstractions for building such complex pipelines for offline data processing; however, there is an increasing demand for support for incremental models over unbounded streaming data. In this work, we focus on the problem of modelling such a pipeline framework and providing algorithms that build on top of basic abstractions fundamental to stream processing. We design a streaming machine learning pipeline as a series of stages such as model building, concept drift detection, and continuous evaluation. We build our prototype on Apache Flink, a distributed data processing system with streaming capabilities, along with a state-of-the-art implementation of a variation of the Vertical Hoeffding Tree (VHT), a distributed decision tree classification algorithm, as a proof of concept. Furthermore, we compare our version of VHT with the current state-of-the-art implementations on distributed data processing systems in terms of performance and accuracy. Our experimental results on real-world data sets show significant performance benefits of our pipeline while maintaining low classification error. We believe that this pipeline framework can offer a good baseline for a full-fledged implementation of various streaming algorithms that can work in parallel.
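
    As a hedged illustration of the staged pipeline described above, and not of the Flink/VHT implementation itself, the sketch below wires the three stages (incremental model building, concept drift detection, and continuous prequential evaluation) around deliberately simple stand-ins for the learner and the detector:

        from collections import Counter

        class MajorityClassModel:
            """Trivial incremental classifier standing in for the VHT learner."""
            def __init__(self):
                self.counts = Counter()
            def predict(self, x):
                return self.counts.most_common(1)[0][0] if self.counts else None
            def learn(self, x, y):
                self.counts[y] += 1

        class WindowedDriftDetector:
            """Signals drift when the recent error rate exceeds a threshold."""
            def __init__(self, window=200, threshold=0.5):
                self.window, self.threshold, self.errors = window, threshold, []
            def update(self, error):
                self.errors.append(error)
                self.errors = self.errors[-self.window:]
                return (len(self.errors) == self.window
                        and sum(self.errors) / self.window > self.threshold)

        def run_pipeline(stream):
            """Test-then-train over an unbounded stream of (features, label) pairs."""
            model, detector = MajorityClassModel(), WindowedDriftDetector()
            seen = mistakes = 0
            for x, y in stream:
                error = int(model.predict(x) != y)   # continuous evaluation
                mistakes += error
                seen += 1
                if detector.update(error):           # concept drift detection
                    model = MajorityClassModel()     # rebuild the model after drift
                model.learn(x, y)                    # incremental model building
            return mistakes / max(seen, 1)

    In the prototype these stages run on Apache Flink with the distributed VHT as the learner, but the dataflow follows the same test-then-train loop.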

    Chemistry-Inspired Adaptive Stream Processing

    Stream processing engines have emerged as the next generation of data processing systems, addressing the need for low-delay processing. While these systems have been widely studied recently, their ability to adapt their processing logic at run time, upon the detection of events calling for adaptation, is still an open issue. Chemistry-inspired models of computation have been shown to ease the specification of adaptive systems. In this paper, we argue that a higher-order chemical model can be used to specify such an adaptive SPE in a natural way. We also show how such programming abstractions can be enacted in a decentralised environment.
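
    To give a flavour of the chemical metaphor (a generic illustration, not the paper's higher-order chemical calculus), computation is expressed as reaction rules that repeatedly consume and produce elements of a multiset until no rule can fire:

        def react(multiset, rules):
            """Apply reaction rules until the multiset becomes inert."""
            changed = True
            while changed:
                changed = False
                for rule in rules:
                    result = rule(multiset)
                    if result is not None:
                        multiset, changed = result, True
            return multiset

        def keep_max(ms):
            """Reaction: two numbers collide and only the larger survives."""
            nums = [m for m in ms if isinstance(m, int)]
            if len(nums) < 2:
                return None
            a, b = nums[0], nums[1]
            rest = list(ms)
            rest.remove(a)
            rest.remove(b)
            return rest + [max(a, b)]

        print(react([3, 1, 4, 1, 5], [keep_max]))   # -> [5]

    In a higher-order model the rules themselves are molecules in the multiset, so an adaptation event can itself be written as a reaction that removes one processing rule and injects another, which is what makes the model a natural fit for adaptive stream processing.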

    A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures

    Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing and the Apache-Hadoop paradigm. We propose a basis, a common terminology, and functional factors upon which to analyze the two approaches. We discuss the concept of "Big Data Ogres" and their facets as a means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms, and compare and contrast the two approaches. Specifically, we examine common implementations and approaches of these paradigms, shed light upon the reasons for their current "architecture", and discuss some typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity. We discuss the potential integration of different implementations, across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms to a semi-quantitative methodology. We use a simple and broadly used Ogre (K-means clustering) and characterize its performance on a range of representative platforms, covering several implementations from both paradigms. Our experiments provide an insight into the relative strengths of the two paradigms. We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions. Comment: 8 pages, 2 figure
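
    Since K-means clustering is the Ogre used for the semi-quantitative comparison, a minimal NumPy sketch of the kernel may help orient the reader; the benchmarked implementations are, of course, the far larger HPC and Hadoop codes, not this toy:

        import numpy as np

        def kmeans(points, k, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            centers = points[rng.choice(len(points), size=k, replace=False)]
            for _ in range(iters):
                # Assignment step: label every point with its nearest centre.
                dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
                labels = dists.argmin(axis=1)
                # Update step: move each centre to the mean of its assigned points.
                new_centers = np.array([
                    points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                    for j in range(k)
                ])
                if np.allclose(new_centers, centers):
                    break
                centers = new_centers
            return centers, labels

        pts = np.random.default_rng(1).normal(size=(1000, 2))
        centers, labels = kmeans(pts, k=3)

    Distributed versions in either paradigm parallelize the assignment step over data partitions and combine partial sums for the update step, which is what makes this kernel a convenient common benchmark.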