
    A Switch Architecture for Real-Time Multimedia Communications

    In this paper we present a switch that can be used to transfer multimedia-type traffic. The switch provides a guaranteed throughput and a bounded latency. We focus on the design of a prototype Switching Element using the new technology opportunities being offered today. The architecture meets the multimedia requirements yet has low complexity and needs a minimum amount of hardware. A main topic of this paper is the background of the architectural design decisions made, including the interconnection topology, buffer organization, routing and scheduling. The implementation of the switching fabric with FPGAs allows us to experiment with switching mode, routing strategy and scheduling policy in a multimedia environment. The switching elements are interconnected in a Kautz topology. Kautz graphs have interesting properties: a small diameter, a degree that is independent of the network size, fault tolerance and a simple routing algorithm.
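
    The "simple routing algorithm" of Kautz graphs can be illustrated with a short sketch. In a Kautz graph K(d, n), nodes are labeled by words of length n+1 over an alphabet of d+1 symbols in which no two consecutive symbols are equal, and each hop shifts the label left and appends a new symbol. The function names and parameters below are illustrative, not taken from the paper.

```python
from itertools import product

def kautz_nodes(d, n):
    """All vertices of the Kautz graph K(d, n): words of length n+1 over an
    alphabet of d+1 symbols where consecutive symbols differ."""
    alphabet = range(d + 1)
    return [w for w in product(alphabet, repeat=n + 1)
            if all(a != b for a, b in zip(w, w[1:]))]

def neighbors(node, d):
    """Out-neighbors: shift the label left and append any symbol that
    differs from the new last symbol (out-degree is d)."""
    return [node[1:] + (x,) for x in range(d + 1) if x != node[-1]]

def route(src, dst):
    """Shift-register routing: find the longest suffix of src that is a
    prefix of dst, then shift in the remaining symbols of dst one hop at
    a time.  The path length is at most n+1, the graph diameter."""
    length = len(src)
    overlap = 0
    for k in range(length, 0, -1):       # longest suffix/prefix overlap
        if src[length - k:] == dst[:k]:
            overlap = k
            break
    path, cur = [src], src
    for sym in dst[overlap:]:            # shift in the missing symbols
        cur = cur[1:] + (sym,)
        path.append(cur)
    return path
```

    Because each node only inspects the destination label and its own label, the per-element routing logic stays small, which fits the paper's goal of a low-complexity switching element.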

    Enhancing Performance of Parallel Self-Organizing Map on Large Dataset with Dynamic Parallel and Hyper-Q

    Self-Organizing Map (SOM) is an unsupervised artificial neural network algorithm. Even though this algorithm is known to be an appealing clustering method, many efforts to improve its performance are still pursued in various research works. In order to gain faster computation time, for instance, running SOM in parallel has been the focus of many previous works. Utilization of the Graphics Processing Unit (GPU) as a parallel calculation engine also continues to improve. However, the total computation time of parallel SOM is still not optimal on large datasets. In this research, we propose a combination of Dynamic Parallel and Hyper-Q to further improve the performance of parallel SOM in terms of computing time. Dynamic Parallel and Hyper-Q are utilized in the processes of calculating distances and searching for the best-matching unit (BMU), while updating the weights of the BMU and its neighbors is performed using Hyper-Q only. The results of this study indicate an increase in parallel SOM performance of up to two times compared to an implementation without Dynamic Parallel and Hyper-Q.
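
    As a rough illustration of the three phases the abstract parallelizes (distance calculation, BMU search, and weight update), here is a minimal NumPy sketch of one SOM training step. The decay schedule and parameter names are assumptions for illustration, not the authors' GPU implementation.

```python
import numpy as np

def som_step(weights, x, t, n_iters, sigma0=1.0, lr0=0.5):
    """One SOM training step on a (rows, cols, dim) weight grid.
    The three phases below are the ones mapped to parallel execution:
    distance computation, BMU search, and neighborhood weight update."""
    rows, cols, dim = weights.shape

    # Phase 1: squared Euclidean distance from the input to every unit
    # (embarrassingly parallel over the grid).
    dists = np.sum((weights - x) ** 2, axis=2)

    # Phase 2: best-matching unit = argmin reduction over the grid.
    bmu = np.unravel_index(np.argmin(dists), dists.shape)

    # Phase 3: pull every unit toward the input, weighted by a Gaussian
    # of its grid distance to the BMU; learning rate and radius decay.
    lr = lr0 * np.exp(-t / n_iters)
    sigma = sigma0 * np.exp(-t / n_iters)
    gy, gx = np.mgrid[0:rows, 0:cols]
    grid_d2 = (gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))
    weights += lr * h[:, :, None] * (x - weights)
    return bmu
```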

    Enhanced parallel SOM based on heterogeneous system platform

    In this paper, we propose an enhanced parallel Self-Organizing Map (SOM) framework based on a heterogeneous system platform, specifically a Central Processing Unit (CPU) and Graphics Processing Unit (GPU) soldered together on a single chip. The framework aims to improve the speed of parallel SOM on the GPU, since parallel SOM on a discrete GPU is burdened by communication latency caused by the device architecture being isolated from the CPU. The parallel SOM has been extended to the heterogeneous system platform, and double kernels for distance calculation and finding the Best Matching Unit (BMU) are introduced. The results are tested using benchmark data on two different platforms: GPU and heterogeneous system. The proposed framework shows improvement compared to standard parallel SOM on both the GPU and the heterogeneous system.
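
    The "double kernel" split for distance calculation and BMU search can be sketched as a two-stage reduction: a first stage that computes per-block distances and local minima (the role of a GPU kernel), and a second stage that combines the partial results (a small reduction kernel, or a CPU pass on an integrated chip where no PCIe copy is required). The block size and function names below are illustrative assumptions, not the paper's kernels.

```python
import numpy as np

def bmu_two_stage(weights_flat, x, block=256):
    """Hypothetical two-stage BMU search over a flattened (n, dim) weight
    array.  Stage 1 mirrors a distance kernel: each 'block' of units
    computes its local minimum distance and index.  Stage 2 mirrors the
    final reduction that combines the per-block results."""
    n = weights_flat.shape[0]

    # Stage 1: per-block distance computation and local argmin.
    partials = []
    for start in range(0, n, block):
        chunk = weights_flat[start:start + block]
        d = np.sum((chunk - x) ** 2, axis=1)
        j = int(np.argmin(d))
        partials.append((d[j], start + j))

    # Stage 2: combine partial results into the global BMU.
    best_d, best_i = np.inf, -1
    for d, i in partials:
        if d < best_d:
            best_d, best_i = d, i
    return best_i, best_d
```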

    A Survey of Techniques for Architecting TLBs

    A “translation lookaside buffer” (TLB) caches virtual-to-physical address translation information and is used in systems ranging from embedded devices to high-end servers. Since the TLB is accessed very frequently and a TLB miss is extremely costly, prudent management of the TLB is important for improving the performance and energy efficiency of processors. In this paper, we present a survey of techniques for architecting and managing TLBs. We characterize the techniques across several dimensions to highlight their similarities and distinctions. We believe that this paper will be useful for chip designers, computer architects and system engineers.
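
    As a minimal sketch of what a TLB does (not a model from the survey), the following toy class caches virtual-page-number to physical-frame-number mappings with LRU replacement and falls back to a page-table lookup on a miss. The page size, capacity, and names are illustrative.

```python
from collections import OrderedDict

PAGE_SHIFT = 12   # 4 KiB pages (illustrative)

class TinyTLB:
    """Minimal fully-associative TLB model with LRU replacement.
    It caches virtual-page-number -> physical-frame-number mappings;
    a miss falls back to a (slow) page-table walk."""
    def __init__(self, entries=64):
        self.entries = entries
        self.map = OrderedDict()          # VPN -> PFN, ordered by recency
        self.hits = self.misses = 0

    def translate(self, vaddr, page_table):
        vpn, offset = vaddr >> PAGE_SHIFT, vaddr & ((1 << PAGE_SHIFT) - 1)
        if vpn in self.map:               # TLB hit: fast path
            self.hits += 1
            self.map.move_to_end(vpn)
            pfn = self.map[vpn]
        else:                             # TLB miss: costly page-table walk
            self.misses += 1
            pfn = page_table[vpn]         # stands in for the hardware walk
            if len(self.map) >= self.entries:
                self.map.popitem(last=False)   # evict the LRU entry
            self.map[vpn] = pfn
        return (pfn << PAGE_SHIFT) | offset
```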

    3rd Many-core Applications Research Community (MARC) Symposium. (KIT Scientific Reports ; 7598)

    This manuscript includes recent scientific work regarding the Intel Single-Chip Cloud Computer and describes novel approaches for programming and run-time organization.

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand on the 2025 timescale is at least two orders of magnitude greater -- and in some cases more -- than what is available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.

    A characterization of parallel systems

    A taxonomy for parallel processing systems is presented which has some advantages over previous taxonomies. The taxonomy characterizes parallel processing systems using four parameters: topology, communication, granularity, and operation. These parameters are used repetitively in a hierarchical fashion to produce a taxonomic structure which is extensible to the level of detail desired. Topology describes the structure of the principal interconnections. Communication describes the flow of data and programs through the system. Granularity describes the size of the largest repeated element, or grain. Operation describes the important functional properties of each grain, especially the ratio of storage to logic circuitry. Granularity and topology are structural parameters, while operation and communication are functional parameters which describe the behavior of the system components. A final section of this paper includes examples of the application of the taxonomy to several parallel processing systems.
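
    A minimal sketch of how the four-parameter, hierarchically repeated taxonomy might be recorded: each grain is described by topology, communication, granularity, and operation, and can optionally be refined into sub-grains. The class, field values, and example system are assumptions for illustration, not from the report.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Grain:
    """One level of the taxonomy: four parameters per grain, applied
    recursively so each grain can itself be described in more detail."""
    name: str
    topology: str        # structural: shape of the principal interconnections
    communication: str   # functional: how data and programs flow
    granularity: str     # structural: size of the largest repeated element
    operation: str       # functional: e.g. ratio of storage to logic circuitry
    sub_grains: List["Grain"] = field(default_factory=list)

# Illustrative (hypothetical) description of a message-passing cluster:
cluster = Grain(
    name="cluster", topology="fat tree", communication="message passing",
    granularity="node", operation="logic-heavy",
    sub_grains=[Grain("node", "shared bus", "shared memory",
                      "core", "balanced")])
```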

    Towards Expressive and Versatile Visualization-as-a-Service (VaaS)

    The rapid growth of data in scientific visualization has posed significant challenges to the scalability and availability of interactive visualization tools. These challenges can be largely attributed to the limitations of traditional monolithic applications in handling large datasets and accommodating multiple users or devices. To address these issues, the Visualization-as-a-Service (VaaS) architecture has emerged as a promising solution. VaaS leverages cloud-based visualization capabilities to provide on-demand and cost-effective interactive visualization. Existing VaaS designs have been simplistic, focusing on task parallelism with single-user-per-device tasks for predetermined visualizations. This dissertation aims to extend the capabilities of VaaS by exploring data-parallel visualization services with multi-device support and hypothesis-driven explorations. By incorporating stateful information and enabling dynamic computation, VaaS' performance and flexibility for various real-world applications are improved. The dissertation explores the history of monolithic and VaaS architectures, the design and implementation of three new VaaS applications, and a final exploration of the future of VaaS. This research contributes to the advancement of interactive scientific visualization, addressing the challenges posed by large datasets and remote collaboration scenarios.
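
    As a loose sketch of the "stateful information" idea (not the dissertation's actual design), the following toy session object keeps shared view state on the service side so several devices can attach to one interactive visualization without re-sending the dataset. All names and fields are hypothetical.

```python
import uuid

class VaaSSession:
    """Hypothetical per-session state for a VaaS backend: shared camera
    state plus the set of devices attached to the same visualization."""
    def __init__(self, dataset_id):
        self.dataset_id = dataset_id
        self.camera = {"azimuth": 0.0, "elevation": 0.0, "zoom": 1.0}
        self.devices = set()

    def attach(self, device_id):
        self.devices.add(device_id)
        return self.camera            # a new device resumes the shared view

    def update_camera(self, device_id, **deltas):
        for key, delta in deltas.items():
            self.camera[key] = self.camera.get(key, 0.0) + delta
        return self.camera            # broadcastable to all attached devices

sessions = {}

def open_session(dataset_id):
    """Create a session and return its id (as a service endpoint might)."""
    sid = str(uuid.uuid4())
    sessions[sid] = VaaSSession(dataset_id)
    return sid
```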