2,584 research outputs found

    A CASE STUDY OF VARIOUS WIRELESS NETWORK SIMULATION TOOLS

    4G is the fastest-developing system in the history of mobile communication networks, and network connectivity is paramount for large enterprises of all kinds. 4G not only provides super-fast connectivity to millions of users but can also act as an enterprise network connectivity enabler, with inherent advantages such as higher bandwidth, low latency, and higher spectrum efficiency, along with backward compatibility and future proofing. The design of the 4G-based Long Term Evolution (LTE) physical network provides the flexibility required for optimization during the development phase. In this paper, simulation tools supporting LTE networks are presented to demonstrate the need for hardware co-simulation of the LTE system. After the feasibility analysis, the case for porting the model to a Field Programmable Gate Array platform is examined in a detailed survey with supporting inferences, along with a comparison of different wireless network simulators suitable for LTE.

    Development of Real Time Operating System for PIC18F Microcontrollers for Educational Purposes

    A Real Time Operating System (RTOS) is a small operating system designed to manage the peripherals of microcontrollers and expose a low-level layer that enables the parallel execution of multiple programs. In addition, RTOSes are chiefly concerned with guaranteeing real-time processing. This project aims to implement and develop an RTOS for the PIC18Fxxx family, developed under the MPLAB integrated development environment. The kernel of this RTOS is written in assembly language, while users may use both assembly and C to develop their applications. A previous RTOS project, PICos18, developed by Pragamtec inc., is considered as the basis; it was selected because of its free license and the availability of its documentation. PICos18 is based on OSEK/VDX (German/French industrial standards for operating systems). The main contributions of this project are, first, developing the RTOS to review and demonstrate RTOS concepts; second, developing drivers and applications compatible with the developed RTOS; and finally, presenting the developed RTOS in an educational form for future use as a teaching tool in microcontroller-based courses.
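
    PICos18 follows the OSEK/VDX task model, so applications built on such a kernel are typically written as statically declared C tasks on top of the assembly kernel. As a hedged illustration only, the sketch below uses the standard OSEK API names (DeclareTask, TASK, WaitEvent, ClearEvent); the device header, the TICK_EVENT event, and the pin choice are placeholders, not taken from the developed RTOS itself.

        #include "device.h"        /* placeholder for the PIC18F register definitions; the real header depends on the toolchain */

        DeclareTask(Blink);        /* the task must also appear in the static OS configuration */

        /* Extended task: toggles a LED each time an application-defined alarm sets TICK_EVENT. */
        TASK(Blink)
        {
            for (;;) {
                WaitEvent(TICK_EVENT);      /* block until the periodic event arrives */
                ClearEvent(TICK_EVENT);
                LATBbits.LATB0 ^= 1;        /* toggle pin RB0 (board-specific) */
            }
        }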

    Investigation of a simultaneous multithreaded architecture

    Many enhancements have been made to traditional general-purpose load-store computer architectures, among them memory hierarchy improvements, branch prediction, and multiple-issue processors. A major problem with current microprocessor design is the disparity between the rapid increase in CPU speed and the much more moderate increase in the speed of main memory access. The simultaneous multithreaded architecture is an extension of the single-threaded architecture that helps hide the performance penalty created by long-latency instructions, branch mispredictions, and memory accesses. Simultaneous multithreaded architectures use a more flexible form of parallelism, taking advantage of both instruction-level and thread-level parallelism. The goal of this project was to design, simulate, and analyze a model of a simultaneous multithreaded architecture in order to evaluate design alternatives. The simulator was created by modifying a version of the SimpleScalar toolset, developed at the University of Wisconsin. The simulations document an overall system performance improvement for a simultaneous multithreaded architecture. In early simulation results, obtained with the same number of functional units, an improvement in instructions per cycle (IPC) of between 43% and 58% was found using four threads versus a single thread. The horizontal waste rate, which measures the number of unused issue slots, was reduced by between 35% and 46%. The vertical waste rate, which measures the percentage of unused issue cycles (no issue slots used in a cycle), was reduced by between 46% and 61%. These results are derived from a set of four sample programs. It was also found that increasing the number of certain functional units did not improve performance, whereas increasing the number of other types of functional units had a significant positive impact on performance.
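
    To make the two waste metrics concrete, the sketch below (in C, with illustrative names and an assumed issue width) derives them from a per-cycle issue count exactly as defined above: vertical waste counts wholly idle cycles, horizontal waste counts empty slots in cycles that issued at least one instruction.

        #include <stdio.h>

        #define ISSUE_WIDTH 8   /* issue slots available each cycle (assumed value) */

        /* issued[c] = number of instructions issued in cycle c */
        void waste_rates(const int issued[], int cycles)
        {
            int idle_cycles = 0, empty_slots = 0;
            for (int c = 0; c < cycles; c++) {
                if (issued[c] == 0)
                    idle_cycles++;                          /* vertical waste: nothing issued this cycle */
                else
                    empty_slots += ISSUE_WIDTH - issued[c]; /* horizontal waste: partially filled cycle */
            }
            printf("vertical waste:   %.1f%% of cycles\n", 100.0 * idle_cycles / cycles);
            printf("horizontal waste: %.1f%% of issue slots\n",
                   100.0 * empty_slots / ((double)cycles * ISSUE_WIDTH));
        }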

    Space benefits: The secondary application of aerospace technology in other sectors of the economy

    Benefit cases of aerospace technology utilization are presented for manufacturing, transportation, utilities, and health. General, organization, geographic, and field center indexes are included

    The Glasgow Raspberry Pi Cloud: a scale model for cloud computing infrastructures

    Data Centers (DCs) used to support Cloud services often consist of tens of thousands of networked machines under a single roof. The significant capital outlay required to replicate such infrastructures constitutes a major obstacle to practical implementation and evaluation of research in this domain. Currently, most research into Cloud computing relies either on limited software simulation or on testbed environments with a handful of machines. The recent introduction of the Raspberry Pi, a low-cost, low-power single-board computer, has made the construction of miniature Cloud DCs more affordable. In this paper, we present the Glasgow Raspberry Pi Cloud (PiCloud), a scale model of a DC composed of clusters of Raspberry Pi devices. The PiCloud emulates every layer of a Cloud stack, ranging from resource virtualisation to network behaviour, providing a full-featured Cloud computing research and educational environment.

    Integrated Design and Implementation of Embedded Control Systems with Scilab

    Embedded systems play an increasingly important role in control engineering. Despite their popularity, embedded systems are generally subject to resource constraints, so it is difficult to build complex control systems on embedded platforms. Traditionally, the design and implementation of control systems are separated, which makes the development of embedded control systems highly time-consuming and costly. To address these problems, this paper presents a low-cost, reusable, reconfigurable platform that enables integrated design and implementation of embedded control systems. To minimize cost, free and open-source software packages such as Linux and Scilab are used. Scilab is ported to the embedded ARM-Linux system, and drivers for interfacing Scilab with several communication protocols, including serial, Ethernet, and Modbus, are developed. Experiments are conducted to test the developed embedded platform. The use of Scilab enables the implementation of complex control algorithms on embedded platforms, and with the developed platform it is possible to perform all phases of the development cycle of embedded control systems in a unified environment, thus reducing development time and cost.
    Comment: 15 pages, 14 figures; Open Access at http://www.mdpi.org/sensors/papers/s8095501.pd
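
    The abstract does not show the driver internals; as a rough illustration, a serial driver on ARM-Linux would typically wrap the POSIX termios interface along the lines of the sketch below (device path, baud rate, and function name are assumptions, not the paper's code).

        #include <fcntl.h>
        #include <termios.h>
        #include <unistd.h>

        /* Open a serial device (e.g. "/dev/ttyS0") in raw 8-bit mode at 115200 baud. */
        int open_serial(const char *dev)
        {
            int fd = open(dev, O_RDWR | O_NOCTTY);
            if (fd < 0)
                return -1;

            struct termios tio;
            if (tcgetattr(fd, &tio) < 0) { close(fd); return -1; }
            cfmakeraw(&tio);                 /* no echo, no line editing, 8 data bits */
            cfsetispeed(&tio, B115200);
            cfsetospeed(&tio, B115200);
            tio.c_cflag |= CLOCAL | CREAD;   /* ignore modem control lines, enable receiver */
            if (tcsetattr(fd, TCSANOW, &tio) < 0) { close(fd); return -1; }
            return fd;                       /* caller exchanges controller I/O over this descriptor */
        }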

    Simple out of order core for GPGPUs

    GPU architectures have become popular for executing general-purpose programs, relying on a large number of threads that run concurrently to hide the latency among dependent instructions. This approach has an important cost/overhead: data locality suffers because the many concurrently running threads put increased pressure on the memory hierarchy, and storing and managing the on-chip state of those many threads is expensive. This paper presents SOCGPU (Simple Out-of-order Core for GPU), a simple out-of-order execution mechanism that requires neither register renaming nor scoreboards. It uses a small instruction buffer and a tiny dependence matrix to keep track of dependencies among instructions and avoid data hazards. Evaluations for an Nvidia Tesla V100-like GPU show that SOCGPU provides a speed-up of up to 2.3 in some machine learning programs and 1.38 on average for a variety of benchmarks, while reducing energy consumption by 6.5%, with only 2.4% area overhead. This work has been supported by the CoCoUnit ERC Advanced Grant of the EU’s Horizon 2020 program (grant No 833057), the Spanish State Research Agency (MCIN/AEI) under grant PID2020-113172RB-I00, and the ICREA Academia program.
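
    The dependence-matrix idea can be sketched in a few lines: each buffered instruction keeps one bit per other buffered instruction it waits on, it may issue once its row is empty, and completion clears the corresponding column. The buffer size, types, and function names below are illustrative, not the SOCGPU implementation.

        #include <stdint.h>
        #include <stdbool.h>

        #define BUF_SIZE 16                    /* entries in the small instruction buffer (assumed) */

        /* dep[i] has bit j set while buffered instruction i still waits on instruction j. */
        static uint16_t dep[BUF_SIZE];

        void add_dependence(int younger, int older)   /* younger consumes a value produced by older */
        {
            dep[younger] |= (uint16_t)(1u << older);
        }

        bool ready_to_issue(int i)                    /* may issue, possibly out of order, once it waits on nothing */
        {
            return dep[i] == 0;
        }

        void complete(int i)                          /* completion wakes up every dependent instruction */
        {
            for (int k = 0; k < BUF_SIZE; k++)
                dep[k] &= (uint16_t)~(1u << i);
            dep[i] = 0;
        }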

    Tiny x86 - Architecture Simulator for Educational Purposes

    This thesis presents the tiny x86 architecture and a virtual machine designed to help students understand various compiler techniques and their effect on program performance. Compared to existing instruction set architectures, tiny x86 is simpler and easier to use, as it comes with a C++ API rather than binary encodings and does not limit itself to a single design principle (both CISC and RISC features are supported). The VM also offers extensive configuration options, allowing it to (de-)emphasize various architectural features (register pressure, memory latency, instruction timings, etc.). The VM is already used in the NI-GEN (Code Generation) course at FIT CTU, where its simplicity allows students to write a full compiler pipeline during the term.

    Compass: A Decentralized Scheduler for Latency-Sensitive ML Workflows

    We consider ML query processing in distributed systems where GPU-enabled workers coordinate to execute complex queries: a computing style often seen in applications that interact with users in support of image processing and natural language processing. In such systems, coscheduling of GPU memory management and task placement represents a promising opportunity. We propose Compass, a novel framework that unifies these functions to reduce job latency while using resources efficiently, placing tasks where their data dependencies will be satisfied, collocating tasks from the same job (when this will not overload the host or its GPU), and efficiently managing GPU memory. Comparison with other state-of-the-art schedulers shows a significant reduction in completion times while requiring the same amount of resources or even fewer; in one case, just half the servers were needed to process the same workload.
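
    The abstract only names the placement goals, so the following is a speculative sketch of a greedy rule with those properties (prefer a worker that already holds the task's inputs, then one already running the same job, never one whose GPU memory is nearly full); all structures, thresholds, and names are invented for illustration and are not Compass's actual policy.

        #include <stdbool.h>

        #define NUM_WORKERS 8

        struct worker {
            int    running_job;     /* job whose tasks currently occupy this worker, or -1 */
            double gpu_mem_used;    /* fraction of GPU memory in use, 0.0 .. 1.0 */
            bool   has_inputs;      /* does this worker already hold the task's input data? */
        };

        /* Return the index of the preferred worker for a task of 'job_id', or -1 to queue it. */
        int place_task(const struct worker w[NUM_WORKERS], int job_id)
        {
            int best = -1;
            double best_score = -1.0;
            for (int i = 0; i < NUM_WORKERS; i++) {
                if (w[i].gpu_mem_used > 0.9)                  /* would overload this host's GPU */
                    continue;
                double score = 1.0 - w[i].gpu_mem_used;       /* favour spare capacity */
                if (w[i].has_inputs)            score += 2.0; /* data dependencies satisfied locally */
                if (w[i].running_job == job_id) score += 1.0; /* collocate tasks of the same job */
                if (score > best_score) { best_score = score; best = i; }
            }
            return best;
        }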