
    The AXIOM software layers

    The AXIOM project aims at developing a heterogeneous computing board (SMP-FPGA); this paper explains the software layers developed within the project, among them OmpSs, which provides an easy way to execute heterogeneous codes on multiple cores. People and objects will soon share the same digital network for information exchange, in what has been named the age of cyber-physical systems. The general expectation is that people and systems will interact in real time. This puts pressure on systems design to support increasing demands for computational power while keeping a low power envelope. Additionally, modular scaling and easy programmability are important for these systems to become widespread. This set of expectations imposes scientific and technological challenges that need to be properly addressed. The AXIOM project (Agile, eXtensible, fast I/O Module) will research new hardware/software architectures for cyber-physical systems to meet these expectations. The technical approach aims at solving the fundamental problems that stand in the way of easy programmability of heterogeneous multi-core, multi-board systems. AXIOM proposes the use of the task-based OmpSs programming model, leveraging low-level communication interfaces provided by the hardware. Modular scalability will be possible thanks to a fast interconnect embedded into each module. To this aim, an innovative ARM- and FPGA-based board will be designed, with enhanced capabilities for interfacing with the physical world. Its effectiveness will be demonstrated with key scenarios such as Smart Video-Surveillance and Smart Living/Home (domotics).
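
    As a rough sketch of what the OmpSs task-based model looks like in practice, the C fragment below annotates a function as a task with explicit in/out data dependences so the runtime can run independent blocks concurrently across cores. The block size, function names, and workload are our own illustrative assumptions, not code from the AXIOM project.

        #define BS 1024  /* block size; illustrative */

        /* Every call to vec_add becomes a task; the in/out clauses
         * declare data dependences so the OmpSs runtime can schedule
         * independent blocks concurrently (or offload them to an
         * accelerator such as the board's FPGA). */
        #pragma omp task in(a[0;BS], b[0;BS]) out(c[0;BS])
        void vec_add(const float *a, const float *b, float *c)
        {
            for (int i = 0; i < BS; i++)
                c[i] = a[i] + b[i];
        }

        void add_all(const float *a, const float *b, float *c, int n)
        {
            for (int i = 0; i < n; i += BS)
                vec_add(&a[i], &b[i], &c[i]);   /* one task per block */
            #pragma omp taskwait                /* wait for all tasks */
        }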

    FASTCUDA: Open Source FPGA Accelerator & Hardware-Software Codesign Toolset for CUDA Kernels

    Using FPGAs as hardware accelerators that communicate with a central CPU is becoming common practice in the embedded design world, but there is as yet no standard methodology and toolset to facilitate this path. On the other hand, languages such as CUDA and OpenCL provide standard development environments for Graphics Processing Unit (GPU) programming. FASTCUDA is a platform that provides the necessary software toolset, hardware architecture, and design methodology to efficiently adapt the CUDA approach into a new FPGA design flow. With FASTCUDA, the CUDA kernels of a CUDA-based application are partitioned into two groups with minimal user intervention: those that are compiled and executed in parallel software, and those that are synthesized and implemented in hardware. A modern low-power FPGA can provide the processing power (via numerous embedded micro-CPUs) and the logic capacity for both the software and hardware implementations of the CUDA kernels. This paper describes the system requirements and the architectural decisions behind the FASTCUDA approach.
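
    To make the software side of this partitioning concrete: a CUDA kernel body is written once per thread, so when a kernel is assigned to the software partition its thread grid must collapse into ordinary loops for the embedded micro-CPUs. The C sketch below shows this for a SAXPY kernel; SAXPY is our own illustrative example, not a kernel discussed in the paper.

        /* CUDA form (one thread per element):
         *   i = blockIdx.x * blockDim.x + threadIdx.x;
         *   if (i < n) y[i] = a * x[i] + y[i];
         * Software-partition form: the thread indices become a loop. */
        void saxpy_sw(int n, float a, const float *x, float *y)
        {
            for (int i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }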

    Performance Debugging and Tuning using an Instruction-Set Simulator

    Instruction-set simulators allow programmers a detailed level of insight into, and control over, the execution of a program, including parallel programs and operating systems. In principle, instruction-set simulation can model any target computer and gather any statistic. Furthermore, such simulators are usually portable, independent of compiler tools, and deterministic, allowing bugs to be recreated or measurements repeated. Though often viewed as being too slow for use as a general programming tool, in the last several years their performance has improved considerably. We describe SimICS, an instruction-set simulator of SPARC-based multiprocessors developed at SICS, in its rôle as a general programming tool. We discuss some of the benefits of using a tool such as SimICS to support various tasks in software engineering, including debugging, testing, analysis, and performance tuning. We present in some detail two test cases, where we've used SimICS to support analysis and performance tuning of two applications, Penny and EQNTOTT. This work resulted in improved parallelism in, and understanding of, Penny, as well as a performance improvement for EQNTOTT of over an order of magnitude. We also present some early work on analyzing SPARC/Linux, demonstrating the ability of tools like SimICS to analyze operating systems.
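
    The heart of any instruction-set simulator is a deterministic fetch-decode-execute loop into which arbitrary instrumentation can be dropped. The toy C interpreter below is a minimal sketch of our own (its two-register ISA is invented and bears no relation to SPARC or to SimICS internals); it shows how per-opcode statistics fall naturally out of that loop.

        #include <stdint.h>
        #include <stdio.h>

        /* Toy ISA: 0 = halt, 1 = load immediate, 2 = add, 3 = print. */
        enum { HALT, LDI, ADD, PRN };
        typedef struct { uint8_t op, rd, rs; int8_t imm; } Insn;

        int main(void)
        {
            Insn prog[] = {        /* r0 = 2; r1 = 3; r0 += r1; print */
                { LDI, 0, 0, 2 }, { LDI, 1, 0, 3 },
                { ADD, 0, 1, 0 }, { PRN, 0, 0, 0 }, { HALT, 0, 0, 0 },
            };
            int32_t reg[2] = { 0 };
            long icount[4] = { 0 }; /* per-opcode statistic, for free */

            for (int pc = 0; ; pc++) {          /* fetch-decode-execute */
                Insn in = prog[pc];
                icount[in.op]++;
                if (in.op == HALT) break;
                if (in.op == LDI) reg[in.rd] = in.imm;
                if (in.op == ADD) reg[in.rd] += reg[in.rs];
                if (in.op == PRN) printf("r%d = %d\n", in.rd, reg[in.rd]);
            }
            printf("LDI=%ld ADD=%ld PRN=%ld\n",
                   icount[LDI], icount[ADD], icount[PRN]);
            return 0;
        }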

    Distributed real-time operating system (DRTOS) modeling in SpecC

    System-level design of an embedded computing system involves a multi-step process to refine the system from an abstract specification to an actual implementation, by defining and modeling the system at various levels of abstraction. System-level design supports evaluating and optimizing the system early in design exploration.

    Embedded computing systems may consist of multiple processing elements, memories, I/O devices, sensors, and actuators. The selection of processing elements includes instruction-set processors and custom hardware units, such as application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs). Real-time operating systems (RTOS) have been used in embedded systems as an industry standard for years and offer embedded systems characteristics such as concurrency and time constraints. Some existing system-level design languages, such as SpecC, provide the capability to model an embedded system including an RTOS for a single processor. However, due to the increasing number of processing elements in systems and to embedded platforms having multiple processors, there is a need to develop a distributed RTOS modeling mechanism as part of the system-level design methodology. A distributed RTOS (DRTOS) provides services such as multiprocessor task scheduling, interprocess communication, synchronization, and distributed mutual exclusion.

    In this thesis, we develop a DRTOS model as an extension of the existing SpecC single-processor RTOS model to provide the basic functionalities of a DRTOS implementation, and present the refinement methodology for using our DRTOS model during system-level synthesis. The DRTOS model and refinement process are demonstrated in the SpecC SCE environment. The capabilities and limitations of the DRTOS modeling approach are presented.
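
    As an illustration of one such service, the C sketch below models a blocking send/receive channel between two processing elements, with POSIX threads standing in for the PEs. It mirrors the kind of interprocess-communication primitive a DRTOS model provides, but it is not the SpecC API; the names and structure are our own assumptions.

        #include <pthread.h>
        #include <stdio.h>

        /* One-slot blocking channel: a DRTOS-style IPC service. */
        typedef struct {
            pthread_mutex_t lock;
            pthread_cond_t  ready;
            int             data, full;
        } Channel;

        static Channel ch = { PTHREAD_MUTEX_INITIALIZER,
                              PTHREAD_COND_INITIALIZER, 0, 0 };

        static void ch_send(Channel *c, int v)  /* send from one PE */
        {
            pthread_mutex_lock(&c->lock);
            c->data = v;
            c->full = 1;
            pthread_cond_signal(&c->ready);
            pthread_mutex_unlock(&c->lock);
        }

        static int ch_recv(Channel *c)  /* blocking receive on other PE */
        {
            pthread_mutex_lock(&c->lock);
            while (!c->full)
                pthread_cond_wait(&c->ready, &c->lock);
            c->full = 0;
            int v = c->data;
            pthread_mutex_unlock(&c->lock);
            return v;
        }

        static void *pe1_task(void *arg) { ch_send(&ch, 42); return NULL; }

        int main(void)                  /* PE0 consumes, PE1 produces */
        {
            pthread_t pe1;
            pthread_create(&pe1, NULL, pe1_task, NULL);
            printf("PE0 received %d\n", ch_recv(&ch));
            pthread_join(pe1, NULL);
            return 0;
        }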

    A review on Reliability, Security and Memory Management of Numerous Operating Systems

    With the improvement of technology and the growing demands on computer systems, operating systems must be able to provide the required functionality. To provide this functionality, operating systems are designed around factors such as scalability, security, reliability, performance, memory management, and energy efficiency. However, none of these factors can be achieved without facing challenges. This research studied several of these design issues, which interact with one another and must be balanced to achieve an effective result. This review article therefore tries to reveal the major issues, which are individually too complex to solve at once. Finally, by surveying many research articles on these design issues, it provides a guideline to help future researchers overcome the challenges.

    Parallel processing and expert systems

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem-solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.
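
    To illustrate the closing point about data parallelism: instead of firing rules in parallel (where each rule does little work), the condition of a single rule can be matched against every working-memory element at once. The C/OpenMP sketch below is a hypothetical example; the fact layout and the rule are invented for illustration.

        #include <stdio.h>

        #define N 8                     /* working-memory size */
        typedef struct { int temp, zone; } Fact;

        int main(void)
        {
            Fact wm[N] = { {80,1},{95,2},{60,1},{99,3},
                           {70,2},{97,1},{50,3},{88,2} };
            int matched[N];

            /* Data parallelism: test one rule's condition
             * ("temperature above 90") on all elements at once. */
            #pragma omp parallel for
            for (int i = 0; i < N; i++)
                matched[i] = (wm[i].temp > 90);

            for (int i = 0; i < N; i++) /* fire the rule per match */
                if (matched[i])
                    printf("alert: zone %d overheating\n", wm[i].zone);
            return 0;
        }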