
    Middleware and Services for Dynamic Adaptive Neural Network Arrays

    Dynamic Adaptive Neural Network Arrays (DANNAs) are neuromorphic systems that exhibit spiking behaviors and can be designed using evolutionary optimization. Array elements are rapidly reconfigurable and can function as either neurons or synapses, with programmable interconnections and parameters. Visualization applications can examine DANNA element connections, parameters, and functionality, and evolutionary optimization applications can utilize DANNA to speed up neural network simulations. To facilitate interactions with DANNAs from these applications, we have developed a language-agnostic application programming interface (API) that abstracts away the low-level details of communicating with a DANNA and provides a high-level interface for reprogramming and controlling it. The library is also designed in modules in order to adapt to future changes in the design of DANNA, including changes to the DANNA element design, communication protocol, and connection. In addition to communicating with DANNAs, it is also beneficial for applications to store networks with known functionality. Hence, a Representational State Transfer (REST) API with a MongoDB database back-end has been developed to encourage the collection and exploration of networks.
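
    A minimal sketch of how an application might use the network-storage side of such a design: posting a network description to a REST service backed by MongoDB and fetching it back. The base URL, endpoint paths, and field names below are illustrative assumptions, not the actual DANNA API.

```python
# Hypothetical client for a REST network store backed by MongoDB.
# Endpoint paths, field names, and the base URL are assumptions for
# illustration; the real DANNA API may differ.
import requests

BASE_URL = "http://localhost:8080/networks"  # assumed REST endpoint

def store_network(name, elements, connections):
    """Upload a network: elements configured as neurons or synapses,
    plus their programmable interconnections and parameters."""
    payload = {
        "name": name,
        "elements": elements,        # e.g. [{"id": 0, "mode": "neuron", "threshold": 3}]
        "connections": connections,  # e.g. [{"from": 0, "to": 1, "weight": 2, "delay": 1}]
    }
    resp = requests.post(BASE_URL, json=payload)
    resp.raise_for_status()
    return resp.json()["id"]         # document id assumed to be returned by the server

def fetch_network(network_id):
    """Retrieve a stored network so a visualization or evolutionary
    optimization application can reload it onto a DANNA."""
    resp = requests.get(f"{BASE_URL}/{network_id}")
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    net_id = store_network("xor-candidate",
                           elements=[{"id": 0, "mode": "neuron", "threshold": 3}],
                           connections=[])
    print(fetch_network(net_id))
```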

    An OpenCL software compilation framework targeting an SoC-FPGA VLIW chip multiprocessor

    Modern systems-on-chip augment their baseline CPU with coprocessors and accelerators to increase overall computational capability and power efficiency, and thus have evolved into heterogeneous multi-core systems. Several languages have been developed to enable this paradigm shift, including CUDA and OpenCL. This paper discusses a unified compilation environment that enables heterogeneous system design through the use of OpenCL and a highly configurable VLIW chip multiprocessor architecture known as the LE1. An LLVM compilation framework was researched and a prototype developed to enable the execution of OpenCL applications on a number of hardware configurations of the LE1 CMP. The presented OpenCL framework fully automates the compilation flow and supports work-item coalescing, which maps better onto the ILP processor cores of the LE1 architecture. This paper discusses both the software stack and the target hardware architecture in detail and evaluates the scalability of the proposed framework by running 12 industry-standard OpenCL benchmarks drawn from the AMD SDK and Rodinia suites. The benchmarks are executed on 40 LE1 configurations, 10 of which are implemented on an SoC-FPGA and the remainder on a cycle-accurate simulator. Across the 12 OpenCL benchmarks, the results demonstrate near-linear wall-clock performance improvement, from 1.8x (using 2 dual-issue cores) up to 5.2x (using 8 dual-issue cores), and, in one case, super-linear improvement of 8.4x (FixOffset kernel, 8 dual-issue cores). The number of OpenCL benchmarks evaluated makes this study one of the most complete in the literature.
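
    The work-item coalescing mentioned above can be illustrated with a short sketch: rather than dispatching one hardware thread per OpenCL work-item, the compiler serializes a work-group into a loop around the kernel body, exposing independent iterations to the VLIW core's instruction-level parallelism. The Python below is a conceptual illustration of that general transformation under those assumptions, not the LE1 tool flow itself.

```python
# Conceptual sketch of work-item coalescing for a vector-add kernel.
# Not the LE1 compiler: it only illustrates how per-work-item code can be
# rewritten as a per-work-group loop.

def vec_add_work_item(gid, a, b, c):
    """Original per-work-item kernel body: c[gid] = a[gid] + b[gid]."""
    c[gid] = a[gid] + b[gid]

def vec_add_coalesced(group_id, local_size, a, b, c):
    """Coalesced form: one call executes a whole work-group as a loop,
    so the loop iterations (not thread contexts) carry the parallelism."""
    base = group_id * local_size
    for lid in range(local_size):      # loop replaces per-work-item dispatch
        gid = base + lid
        c[gid] = a[gid] + b[gid]

if __name__ == "__main__":
    n, local_size = 8, 4
    a, b, c = list(range(n)), list(range(n)), [0] * n
    for group_id in range(n // local_size):
        vec_add_coalesced(group_id, local_size, a, b, c)
    print(c)  # [0, 2, 4, 6, 8, 10, 12, 14]
```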

    Amazon Web Services, the Lacanian Unconscious, and Digital Life

    In late 2011, ex-Amazon developer Steve Yegge’s rant about his former company described Amazon’s rapid transformation from an online bookstore to a web-services entity with a ruthlessly unified platform, all guided by the idea that the company’s effort to streamline its internal efficiency could be monetized and the resultant software products sold through Amazon Web Services. The media consumerism that fed Amazon’s early years funded a surveilling behemoth, the one that everyone feared Microsoft would become. As such, AWS has become a manifestation of the internet’s Lacanian unconscious (even providing the services and hosting for Reddit), structured around the optimization of Amazon’s business model and built line by line with the labor of easily discarded programmers. In this article, we examine the subtle and far-reaching effects of the Amazon Web Services platform on the Amazon storefront, “cloud services,” and social media, as well as the origins of AWS in theories of programming grounded in neural network theory and “artificial life,” as opposed to AI. In the end, AWS will be shown to be its own unique entity: a platform infinitely extensible and inexhaustible, and a monument to the circumlocutions of cybernetic capital.

    Cloud Computing Trace Characterization and Synthetic Workload Generation

    This thesis researches cloud computing workload characteristics and synthetic workload generation. A heuristic presented in the work guides the process of workload trace characterization and synthetic workload generation. Analysis of a cloud trace provides insight into client request behaviors and statistical parameters. A versatile workload generation tool creates client connections, controls request rates, defines the number of jobs, produces tasks within each job, and manages task durations. The test system consists of multiple clients creating workloads and a server receiving requests, all contained within a virtual machine environment. Statistical analysis verifies that the synthetic workload experimental results are consistent with real workload behaviors and characteristics.
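
    A minimal sketch of the kind of synthetic workload generator described above: jobs arrive according to a Poisson process, each job carries a random number of tasks, and task durations are drawn from a fitted distribution. The distributions and parameter values here are illustrative assumptions, not parameters extracted from the real trace or the thesis's tool.

```python
# Hedged sketch of a synthetic workload generator: jobs with exponential
# inter-arrival times, a random number of tasks per job, and exponentially
# distributed task durations. All parameter names and values are assumptions.
import random

def generate_workload(num_jobs=100, arrival_rate=2.0,
                      mean_tasks_per_job=5, mean_task_duration=1.5, seed=42):
    random.seed(seed)
    workload, t = [], 0.0
    for job_id in range(num_jobs):
        t += random.expovariate(arrival_rate)          # exponential inter-arrival time
        num_tasks = max(1, int(random.expovariate(1.0 / mean_tasks_per_job)))
        tasks = [{"task_id": i,
                  "duration": random.expovariate(1.0 / mean_task_duration)}
                 for i in range(num_tasks)]
        workload.append({"job_id": job_id, "submit_time": t, "tasks": tasks})
    return workload

if __name__ == "__main__":
    for job in generate_workload(num_jobs=3):
        print(job["job_id"], round(job["submit_time"], 2), len(job["tasks"]))
```

    In a trace-driven setup, the rate and duration parameters would instead be fitted to the statistics recovered during trace characterization, so that the generated requests reproduce the observed client behavior.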