
    Towards Decision Support Technology Platform for Modular Systems

    This survey/methodological paper gives an overview of a general decision support platform technology for modular systems (modular/composite alternatives/solutions) in various applied domains. The decision support platform consists of seven basic combinatorial engineering frameworks (system synthesis, system modeling, evaluation, bottleneck detection, improvement/extension, multistage design, combinatorial evolution and forecasting). The platform is based on decision support procedures (e.g., multicriteria selection/sorting, clustering), combinatorial optimization problems (e.g., knapsack, multiple choice problem, clique, assignment/allocation, covering, spanning trees), and their combinations. The following is described: (1) the general scheme of the decision support platform technology; (2) brief descriptions of modular (composite) systems (or composite alternatives); (3) the trend of moving from choice/selection of alternatives to processing of composite alternatives that correspond to hierarchical modular products/systems; (4) a scheme of resource requirements (i.e., human, information-computer); and (5) the basic combinatorial engineering frameworks and their applications in various domains.
    Comment: 10 pages, 9 figures, 2 tables
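
    One of the combinatorial problems named above can be pictured with a toy example. Below is a minimal sketch, under assumptions not taken from the paper (an additive utility model and illustrative module data), of composite-system synthesis cast as a multiple choice knapsack: pick exactly one candidate module per subsystem slot so that total utility is maximized within a shared cost budget.

```python
# Minimal sketch of composite-system synthesis as a multiple choice knapsack:
# pick exactly one candidate module per subsystem slot, maximizing total
# utility under a shared cost budget. The module data and the additive
# utility model are illustrative assumptions, not taken from the paper.

def select_modules(slots, budget):
    """slots: list of lists of (name, cost, utility); returns (utility, picks)."""
    dp = {0: (0, [])}                        # cost spent -> best (utility, picks)
    for candidates in slots:
        nxt = {}
        for spent, (util, picks) in dp.items():
            for name, cost, gain in candidates:
                b = spent + cost
                if b > budget:
                    continue
                cand = (util + gain, picks + [name])
                if b not in nxt or cand[0] > nxt[b][0]:
                    nxt[b] = cand
        dp = nxt                             # every slot must be filled
        if not dp:
            return None                      # no feasible composition
    return max(dp.values())

# Example: three subsystem slots, each with interchangeable candidate modules.
slots = [
    [("cpu_a", 4, 7), ("cpu_b", 6, 9)],
    [("mem_a", 3, 4), ("mem_b", 5, 8)],
    [("io_a", 2, 3), ("io_b", 4, 6)],
]
print(select_modules(slots, budget=12))      # -> (18, ['cpu_a', 'mem_b', 'io_a'])
```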

    An Introduction to Software Engineering and Fault Tolerance

    This book consists of chapters describing novel approaches to integrating fault tolerance into the software development process. They cover a wide range of topics, focusing on fault tolerance during the different phases of software development, software engineering techniques for the verification and validation of fault tolerance means, and languages for supporting fault tolerance specification and implementation. Accordingly, the book is structured into the following three parts: Part A: Fault tolerance engineering: from requirements to code; Part B: Verification and validation of fault tolerant systems; Part C: Languages and tools for engineering fault tolerant systems.

    On whether and how D-RISC and Microgrids can be kept relevant (self-assessment report)

    This report lays out my personal views on D-RISC and Microgrids as of March 2013. It reflects the opinions and insights that I have gained from working on this project during the period 2008-2013. The report is structured in two parts: deconstruction and reconstruction. In the deconstruction phase, I review what I believe are the fundamental motivation and goals of the D-RISC/Microgrids enterprise, and identify what I judge to be shortcomings: that the project did not deliver on its expectations, that fundamental questions are left unanswered, and that its original motivation may no longer be relevant to scientific research in this day and age. In the reconstruction phase, I start by identifying the merits of the current D-RISC/Microgrids technology and know-how taken at face value, re-motivate its existence from a different angle, and suggest new, relevant research questions that could justify continued scientific investment.
    Comment: 45 pages, 5 figures, 2 tables

    Middleware Building Blocks for Workflow Systems

    This paper describes a building blocks approach to the design of scientific workflow systems. We discuss RADICAL-Cybertools as one implementation of the building blocks concept, showing how they are designed and developed in accordance with this approach. The paper offers three main contributions: (i) showing the relevance of the design principles underlying the building blocks approach for supporting scientific workflows on high performance computing platforms; (ii) illustrating a set of building blocks that enable multiple points of integration, "unifying" conceptual reasoning across otherwise very different tools and systems; and (iii) presenting case studies that discuss how RADICAL-Cybertools are integrated with existing workflow, workload, and general purpose computing systems and used to develop domain-specific workflow systems.
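
    The abstract does not spell out the building-block interfaces, so the sketch below only illustrates the general idea: small, independently usable components with narrow responsibilities that a workflow system composes end to end. All class and method names are hypothetical, not the RADICAL-Cybertools API.

```python
# Illustrative sketch of the "building blocks" idea: small, independently
# usable components with narrow interfaces that a workflow system composes.
# Class and method names are hypothetical, not the RADICAL-Cybertools API.

class WorkloadManager:
    """Expands an abstract workflow description into concrete tasks."""
    def expand(self, workflow):
        return [{"task_id": i, "cmd": step} for i, step in enumerate(workflow)]

class ResourceManager:
    """Acquires resources (e.g. a pilot allocation) on an HPC platform."""
    def acquire(self, nodes):
        return {"nodes": nodes, "handle": "allocation-0"}   # placeholder handle

class TaskExecutor:
    """Binds tasks to an acquired resource and runs them."""
    def run(self, tasks, resource):
        return [f"ran {t['cmd']} on {resource['handle']}" for t in tasks]

# A domain-specific workflow system is then one particular composition:
def run_workflow(workflow, nodes=2):
    tasks = WorkloadManager().expand(workflow)
    resource = ResourceManager().acquire(nodes)
    return TaskExecutor().run(tasks, resource)

print(run_workflow(["preprocess", "simulate", "analyze"]))
```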

    (Auto)Focus approaches and their applications: A systematic review

    Focus, a framework for the formal specification and development of interactive systems, was introduced approximately 25 years ago. Since then, this approach has been broadly used in academic and industrial studies, and it has provided a basis for a number of other frameworks focusing on particular domains, as well as for the AF3 modelling tool. In this paper we provide a literature review of the corresponding approaches, academic case studies, and industrial applications of these methods.

    Exploiting the power of multiplicity: a holistic survey of network-layer multipath

    The Internet is inherently a multipath network---for an underlying network with only a single path connecting various nodes would have been debilitatingly fragile. Unfortunately, traditional Internet technologies have been designed around the restrictive assumption of a single working path between a source and a destination. The lack of native multipath support constrains network performance even as the underlying network is richly connected and has redundant multiple paths. Computer networks can exploit the power of multiplicity to unlock the inherent redundancy of the Internet. This opens up a new vista of opportunities, promising increased throughput (through concurrent usage of multiple paths) and increased reliability and fault-tolerance (through the use of multiple paths in backup/redundant arrangements). There are many emerging trends in networking that signify that the Internet's future will be unmistakably multipath, including the use of multipath technology in datacenter computing; multi-interface, multi-channel, and multi-antenna trends in wireless; the ubiquity of mobile devices that are multi-homed with heterogeneous access networks; and the development and standardization of multipath transport protocols such as MP-TCP. The aim of this paper is to provide a comprehensive survey of the literature on network-layer multipath solutions. We present a detailed investigation of two important design issues, namely the control plane problem of how to compute and select the routes, and the data plane problem of how to split the flow on the computed paths. The main contribution of this paper is a systematic articulation of the main design issues in network-layer multipath routing along with a broad-ranging survey of the vast literature on network-layer multipathing. We also highlight open issues and identify directions for future work.
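
    To make the data plane problem mentioned above concrete, here is a minimal sketch, under assumptions of my own (the path list, weights, and hash choice are illustrative), of one common flow-splitting strategy: hash each flow's 5-tuple onto one of the computed paths, so packets of a given flow stay on one path (avoiding reordering) while aggregate traffic is spread across paths in proportion to configured weights.

```python
# Hedged sketch of weighted flow-level splitting across multiple paths.
# The path identifiers, weights, and hash function are illustrative.

import hashlib

def pick_path(flow, paths, weights):
    """flow: (src_ip, dst_ip, src_port, dst_port, proto); returns a path id."""
    digest = hashlib.sha256(repr(flow).encode()).hexdigest()
    point = int(digest, 16) % sum(weights)       # weighted hash bucket
    for path, weight in zip(paths, weights):
        if point < weight:
            return path
        point -= weight

paths = ["via_isp_a", "via_isp_b", "via_isp_c"]
weights = [5, 3, 2]                              # e.g. proportional to path capacity
flow = ("10.0.0.1", "203.0.113.7", 44321, 443, "tcp")
print(pick_path(flow, paths, weights))           # the same flow always maps to the same path
```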

    Triangular Dynamic Architecture for Distributed Computing in a LAN Environment

    A computationally intensive job, split into concurrent pieces and run in a dynamic environment, should take less total processing time. However, distributing jobs across a networked environment is a tedious and difficult task. Job distribution in a Local Area Network based on Triangular Dynamic Architecture (TDA) is a mechanism that establishes a dynamic environment for job distribution, load balancing and distributed processing with minimum interaction from the user. This paper introduces TDA, discusses its architecture, and shows the benefits gained by utilizing such an architecture in a distributed computing environment.
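
    The abstract does not describe TDA's coordination protocol, so the sketch below only illustrates the underlying premise: granulate a large job into concurrent pieces, farm the pieces out to workers, and merge the partial results. A local process pool stands in for LAN nodes; TDA's actual architecture and load balancing are not modeled here.

```python
# Minimal illustration of splitting a compute-heavy job into pieces and
# processing them concurrently, then merging partial results. A process
# pool stands in for networked LAN nodes; this is not TDA itself.

from multiprocessing import Pool

def process_chunk(chunk):
    # Placeholder for the compute-intensive kernel of the job.
    return sum(x * x for x in chunk)

def run_distributed(data, n_workers=4):
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(n_workers) as pool:
        partials = pool.map(process_chunk, chunks)   # one piece per worker
    return sum(partials)                             # merge partial results

if __name__ == "__main__":
    print(run_distributed(list(range(1_000_000))))
```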

    Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing

    With the breakthroughs in deep learning, recent years have witnessed a booming of artificial intelligence (AI) applications and services, spanning from personal assistants to recommendation systems to video/audio surveillance. More recently, with the proliferation of mobile computing and the Internet-of-Things (IoT), billions of mobile and IoT devices are connected to the Internet, generating zillions of bytes of data at the network edge. Driven by this trend, there is an urgent need to push the AI frontiers to the network edge so as to fully unleash the potential of the edge big data. To meet this demand, edge computing, an emerging paradigm that pushes computing tasks and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new interdiscipline, edge AI or edge intelligence, is beginning to receive a tremendous amount of interest. However, research on edge intelligence is still in its infancy, and a dedicated venue for exchanging the recent advances of edge intelligence is highly desired by both the computer systems and artificial intelligence communities. To this end, we conduct a comprehensive survey of the recent research efforts on edge intelligence. Specifically, we first review the background and motivation for artificial intelligence running at the network edge. We then provide an overview of the overarching architectures, frameworks and emerging key technologies for deep learning model training/inference at the network edge. Finally, we discuss future research opportunities on edge intelligence. We believe that this survey will attract escalating attention, stimulate fruitful discussions and inspire further research ideas on edge intelligence.
    Comment: Zhi Zhou, Xu Chen, En Li, Liekang Zeng, Ke Luo, and Junshan Zhang, "Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing," Proceedings of the IEEE
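
    One recurring mechanism in edge-intelligence systems of the kind surveyed here is deciding where to run inference: on the device/edge or offloaded to the cloud. The sketch below is a hedged illustration of such a latency-driven placement decision; the latency model and all numbers are illustrative assumptions, not figures from the survey.

```python
# Hedged sketch of an edge-vs-cloud inference placement decision based on
# estimated end-to-end latency. All figures are illustrative placeholders.

def estimate_latency(input_bytes, compute_ms_local, compute_ms_cloud,
                     uplink_mbps, rtt_ms):
    local = compute_ms_local                                  # run on the edge device
    transfer = (input_bytes * 8) / (uplink_mbps * 1e6) * 1e3  # upload time in ms
    cloud = rtt_ms + transfer + compute_ms_cloud              # offload to the cloud
    return local, cloud

def choose_placement(**kwargs):
    local, cloud = estimate_latency(**kwargs)
    return ("edge", local) if local <= cloud else ("cloud", cloud)

# Example: a 600 kB image, a slow on-device model, a fast cloud model, a 10 Mbps uplink.
print(choose_placement(input_bytes=600_000, compute_ms_local=180,
                       compute_ms_cloud=25, uplink_mbps=10, rtt_ms=40))
# -> ('edge', 180): uploading the input would dominate the cloud path here.
```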

    Asynchronous Complex Analytics in a Distributed Dataflow Architecture

    Scalable distributed dataflow systems have recently experienced widespread adoption, with commodity dataflow engines such as Hadoop and Spark, and even commodity SQL engines, routinely supporting increasingly sophisticated analytics tasks (e.g., support vector machines, logistic regression, collaborative filtering). However, these systems' synchronous (often Bulk Synchronous Parallel) dataflow execution model is at odds with an increasingly important trend in the machine learning community: the use of asynchrony via shared, mutable state (i.e., data races) in convex programming tasks, which has---in a single-node context---delivered noteworthy empirical performance gains and inspired new research into asynchronous algorithms. In this work, we attempt to bridge this gap by evaluating the use of lightweight, asynchronous state transfer within a commodity dataflow engine. Specifically, we investigate asynchronous sideways information passing (ASIP), which presents single-stage parallel iterators with a Volcano-like intra-operator iterator that can be used for asynchronous information passing. We port two synchronous convex programming algorithms, stochastic gradient descent and the alternating direction method of multipliers (ADMM), to use ASIPs. We evaluate an implementation of ASIPs within Apache Spark that exhibits considerable speedups as well as a rich set of performance trade-offs in the use of these asynchronous algorithms.
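
    The single-node asynchrony trend that motivates this work can be pictured with a small, hedged example (not the paper's ASIP mechanism): several threads perform Hogwild-style stochastic gradient descent on a shared, mutable weight vector without any locking. The data, learning rate, and thread count are illustrative.

```python
# Hogwild-style asynchronous SGD on a shared, mutable weight vector:
# threads read and update w without locks (i.e., with benign data races).
# This illustrates the single-node trend the paper cites, not ASIP itself.

import threading
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
w_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ w_true + 0.01 * rng.normal(size=10_000)

def worker(w, rows, lr=0.01, epochs=2):
    for _ in range(epochs):
        for i in rows:
            grad = (X[i] @ w - y[i]) * X[i]   # squared-loss gradient for row i
            w -= lr * grad                    # in-place, lock-free (racy) update

w = np.zeros(5)                               # shared parameter vector
splits = np.array_split(np.arange(len(X)), 4)
threads = [threading.Thread(target=worker, args=(w, rows)) for rows in splits]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(np.round(w, 2))                         # close to w_true despite the races
```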

    A Hardware-Software Blueprint for Flexible Deep Learning Specialization

    Specialized Deep Learning (DL) acceleration stacks, designed for a specific set of frameworks, model architectures, operators, and data types, offer the allure of high performance while sacrificing flexibility. Changes in algorithms, models, operators, or numerical systems threaten the viability of specialized hardware accelerators. We propose VTA, a programmable deep learning architecture template designed to be extensible in the face of evolving workloads. VTA achieves this flexibility via a parametrizable architecture, a two-level ISA, and a JIT compiler. The two-level ISA is based on (1) a task-ISA that explicitly orchestrates concurrent compute and memory tasks and (2) a microcode-ISA which implements a wide variety of operators with single-cycle tensor-tensor operations. Next, we propose a runtime system equipped with a JIT compiler for flexible code generation and heterogeneous execution that enables effective use of the VTA architecture. VTA is integrated and open-sourced into Apache TVM, a state-of-the-art deep learning compilation stack that provides flexibility for diverse models and divergent hardware backends. We propose a flow that performs design space exploration to generate a customized hardware architecture and software operator library that can be leveraged by mainstream learning frameworks. We demonstrate our approach by deploying optimized deep learning models used for object classification and style transfer on edge-class FPGAs.
    Comment: 6 pages plus references, 8 figures
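
    The two-level ISA idea can be pictured with a small, hedged sketch: coarse task-level instructions move data and invoke compute, while the compute instruction names a micro-op sequence that spells out the tensor-tensor operations. The opcodes, fields, and buffer names below are hypothetical and the interpreter is sequential; this is not VTA's actual encoding, its concurrency model, or the Apache TVM API.

```python
# Conceptual sketch of a two-level ISA: task-level LOAD/COMPUTE/STORE
# instructions orchestrate data movement, and COMPUTE dispatches a named
# microcode sequence of tensor ops. Hypothetical encoding, not VTA's.

import numpy as np

def run_program(task_program, microcode, dram):
    sram = {}                                    # on-chip scratchpad buffers
    for op, args in task_program:
        if op == "LOAD":                         # DRAM -> SRAM
            dst, src = args
            sram[dst] = dram[src].copy()
        elif op == "COMPUTE":                    # expand a microcode sequence
            for uop, (dst, a, b) in microcode[args]:
                if uop == "GEMM":
                    sram[dst] = sram[a] @ sram[b]
                elif uop == "ADD":
                    sram[dst] = sram[a] + sram[b]
        elif op == "STORE":                      # SRAM -> DRAM
            dst, src = args
            dram[dst] = sram[src]
    return dram

dram = {"inp": np.ones((4, 8)), "wgt": np.ones((8, 4)), "bias": np.full((4, 4), 0.5)}
task_program = [("LOAD", ("acc_in", "inp")), ("LOAD", ("acc_w", "wgt")),
                ("LOAD", ("acc_b", "bias")), ("COMPUTE", "dense_kernel"),
                ("STORE", ("out", "acc_o"))]
microcode = {"dense_kernel": [("GEMM", ("acc_o", "acc_in", "acc_w")),
                              ("ADD", ("acc_o", "acc_o", "acc_b"))]}
print(run_program(task_program, microcode, dram)["out"])   # 4x4 matrix of 8.5
```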