
    Rhymes: a shared virtual memory system for non-coherent tiled many-core architectures

    The rising core count per processor is pushing chip complexity to a level at which hardware-based cache coherence protocols become too difficult and costly to scale. New many-core hardware and software designs, beyond traditional technologies, are needed to keep up with ever-increasing scalability demands. The Intel Single-chip Cloud Computer (SCC) is a recent research processor exemplifying a new cluster-on-chip architecture that takes a software-oriented approach, rather than hardware support, to implementing shared memory coherence. This paper presents a shared virtual memory (SVM) system, dubbed Rhymes, tailored to this new kind of non-coherent, hybrid-memory architecture. Rhymes features a two-way cache coherence protocol that enforces release consistency for pages allocated in shared physical memory (SPM) and scope consistency for pages in per-core private memory. It also supports page remapping on a per-core basis to boost data locality. We implement Rhymes on the SCC port of the Barrelfish OS. Experimental results show that our SVM outperforms the pure SPM approach used by Intel's software managed coherence (SMC) library by up to 12 times, with superlinear speedups (due to the L2 cache effect) noted for applications with strong data reuse patterns.
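    The release-consistency side of such a protocol can be illustrated with a toy software model: a core's writes stay in a local buffer and become globally visible only at a release point. This is purely illustrative, with invented names; it is not Rhymes' actual data structures or implementation:

```python
class ReleaseConsistentPage:
    """Toy model of release consistency for a shared page: each core's
    writes are buffered locally and flushed to the globally visible copy
    only when that core performs a release. Illustrative only."""

    def __init__(self):
        self.shared = {}  # globally visible copy: offset -> value
        self.local = {}   # buffered writes per core: core -> {offset: value}

    def write(self, core, offset, value):
        # Buffered locally; not yet visible to other cores.
        self.local.setdefault(core, {})[offset] = value

    def release(self, core):
        # At a release, flush this core's buffered writes to shared memory.
        self.shared.update(self.local.pop(core, {}))

    def read(self, core, offset):
        # A core sees its own pending writes first, then the shared copy.
        return self.local.get(core, {}).get(offset, self.shared.get(offset))
```

Under this model, a write by one core is invisible to others until the writer releases, which is what lets the protocol defer and batch coherence traffic.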

    The Design of a Debugger Unit for a RISC Processor Core

    Recently, there has been a significant increase in design complexity for embedded systems, often referred to as hardware/software co-design. The complexity stems from hardware and firmware being closely coupled in order to achieve low power, high performance, and low area. Due to these demands, embedded systems consist of multiple interconnected hardware IPs with complex firmware algorithms running on the device. Often such designs are bare-metal, i.e. without an operating system, which makes debugging difficult due to the lack of insight into the system; as a result, the development cycle and time to market increase. One of the major challenges for bare-metal designs is to capture, effectively and efficiently, the internal data required during debugging or testing in the post-silicon validation stage. Post-silicon validation can leverage technologies such as hardware/software co-verification using hardware accelerators, FPGA emulation, and logic analyzers, which shorten the overall development cycle. This requires the hardware to be instrumented with features that support debugging. As there is no standard for debugging capabilities and debugging infrastructure, they vary from manufacturer to manufacturer and designer to designer. This work implements the minimum features required for debugging a bare-metal core by instrumenting the hardware with debug support. It takes into account that, for a single-core bare-metal embedded system, silicon area is also a constraint, so there must be a trade-off between debugging capabilities implemented in hardware and portions handled in software.
    The paper discusses various debugging approaches developed and implemented on various processor platforms, and implements a new debugging infrastructure by instrumenting the open-source AMBER 25 core with a set of debug features: breakpoints, current-state read, trace, and memory access. The interface between the hardware system and the host system is designed using a JTAG-standard TAP controller. The resulting design can be used for debugging and testing during the post-silicon verification and validation stages. The design is synthesized using Synopsys Design Compiler targeting a 65 nm technology node, and results are compared for the instrumented and non-instrumented systems.
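    To give a flavour of how a host might drive such debug features through a TAP data register, here is a hedged sketch of a command-word encoding. The 4-bit opcode / 28-bit address layout and the opcode values are invented for illustration; they are not the AMBER 25 debug unit's actual encoding:

```python
# Invented opcodes for the four debug features named in the abstract.
BREAKPOINT_SET = 0x1
STATE_READ     = 0x2
TRACE_START    = 0x3
MEM_ACCESS     = 0x4

def encode_debug_cmd(opcode: int, addr: int) -> int:
    """Pack a 4-bit opcode and a 28-bit address into one 32-bit word,
    such as might be shifted into a JTAG data register."""
    assert 0 <= opcode < (1 << 4) and 0 <= addr < (1 << 28)
    return (opcode << 28) | addr

def decode_debug_cmd(word: int) -> tuple[int, int]:
    """Split a 32-bit command word back into (opcode, address)."""
    return word >> 28, word & ((1 << 28) - 1)
```

A real TAP interface would shift such words serially through TDI/TDO under the TAP state machine; the fixed-width packing above is only the framing idea.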

    Integration of Mission Control System, On-board Computer Core and Spacecraft Simulator for a Satellite Test Bench

    The satellite avionics platform has been developed in cooperation with Airbus and is called the "Future Low-cost Platform" (FLP). It is based on an Onboard Computer (OBC) with redundant processor boards built around SPARC V8 microchips of type Cobham Aeroflex UT699. At the University of Stuttgart, a test bench with a real hardware OBC and a fully simulated satellite is available for testing real flight scenarios with the Onboard Software (OBSW) running on representative hardware. The test bench, like the real flying satellite "Flying Laptop" later on, is commanded from a real Ground Control Centre (GCC). The main challenges in the FLP project were the onboard computer design, the software design, and the interfaces between platform and payloads. In the course of industrializing this FLP platform technology for later use in satellite constellations, Airbus has started to set up an in-house test bench where all the technologies shall be developed. The initial plan is to port the first core elements of the FLP OBSW to the new dual-core processor and the new SpaceWire (SpW) routing network. The plan also includes new Mission Control Software with which the OBC can be commanded. The new OBC has a dual-core Cobham Gaisler GR712 processor, and hence all payload-related functionality is to be implemented on the second core, which involves a great deal of low-level task distribution. The resulting SpW router network application and the dual-core platform/payload OBSW sharing are entirely new in the field of satellite engineering.

    Intelligent Systems and Advanced User Interfaces for Design, Operation, and Maintenance of Command Management Systems

    Historically, Command Management Systems (CMS) have been large, expensive, spacecraft-specific software systems that were costly to build, operate, and maintain. Current and emerging hardware, software, and user interface technologies may offer an opportunity to facilitate the initial formulation and design of a spacecraft-specific CMS, as well as to develop a more generic CMS or a set of core components for CMS systems. Current MOC (mission operations center) hardware and software include Unix workstations, the C/C++ and Java programming languages, and X and Java window interface representations. This configuration provides the power and flexibility to support sophisticated systems and intelligent user interfaces that exploit state-of-the-art technologies in human-machine systems engineering, decision making, artificial intelligence, and software engineering. One of the goals of this research is to explore the extent to which technologies developed in the research laboratory can be productively applied in a complex system such as spacecraft command management. Initial examination of some of the issues in CMS design and operation suggests that applying technologies such as intelligent planning, case-based reasoning, design and analysis tools from a human-machine systems engineering point of view (e.g., operator and designer models), and human-computer interaction tools (e.g., graphics, visualization, and animation) may provide significant savings in the design, operation, and maintenance of a spacecraft-specific CMS, as well as continuity for CMS design and development across spacecraft with varying needs. The savings in this case come from software reuse at all stages of the software engineering process.

    The case for a Hardware Filesystem

    As secondary storage devices get faster with flash-based solid state drives (SSDs) and emerging technologies like phase change memory (PCM), overheads in system software such as the operating system (OS) and filesystem become prominent and may limit the potential performance improvements. Moreover, with rapidly increasing on-chip core counts, monolithic operating systems will face scalability issues on these many-core chips. Future operating systems are likely to have a distributed nature, with operating system services separated amongst cores. General-purpose processors are also known to be both performance- and power-inefficient while executing operating system code. In the domain of high performance computing with FPGAs, too, relying on the OS for file I/O transactions using slow embedded processors hinders performance. Migrating the filesystem into a dedicated hardware core has the potential to improve the performance of data-intensive applications by bypassing the OS stack, providing higher bandwidth and reduced latency while accessing disks. To test the feasibility of this idea, an FPGA-based Hardware Filesystem (HWFS) was designed with five basic operations (open, read, write, delete, and seek). Furthermore, multi-disk and RAID-0 (striping) support has been implemented as an option in the filesystem. To reduce design complexity and facilitate easier testing of the HWFS, a RAM disk was used initially. The filesystem core has been integrated and tested with a hardware application core (BLAST) as well as a multi-node FPGA network to provide remote-disk access. Finally, a SATA IP core was developed and directly integrated with HWFS for testing with SSDs. For evaluation, HWFS's performance was compared to an Ext2 filesystem, both on an FPGA-based soft processor and on a modern AMD Opteron Linux server, with sequential and random workloads.
    Results show that the Hardware Filesystem and supporting infrastructure provide a substantial performance improvement over software-only systems. The system is also resource-efficient, consuming less than 3% of the logic and 5% of the Block RAMs of a Xilinx Virtex-6 chip.
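    The five HWFS operations can be exercised in software against a RAM disk, echoing the paper's initial RAM-disk test setup. The class below is a minimal software model for illustration only; its names and semantics are assumptions, not the HWFS hardware interface:

```python
class RamFS:
    """Toy RAM-backed filesystem exposing the five HWFS operations:
    open, read, write, delete, and seek. Illustrative model only."""

    def __init__(self):
        self._files = {}    # file name -> bytearray contents
        self._handles = {}  # fd -> [name, byte offset]
        self._next_fd = 0

    def open(self, name):
        # Create the file if absent; return a fresh descriptor at offset 0.
        self._files.setdefault(name, bytearray())
        fd = self._next_fd
        self._next_fd += 1
        self._handles[fd] = [name, 0]
        return fd

    def write(self, fd, data: bytes):
        name, off = self._handles[fd]
        buf = self._files[name]
        buf[off:off + len(data)] = data  # overwrite/extend at the offset
        self._handles[fd][1] = off + len(data)

    def seek(self, fd, off):
        self._handles[fd][1] = off

    def read(self, fd, n):
        name, off = self._handles[fd]
        data = bytes(self._files[name][off:off + n])
        self._handles[fd][1] = off + len(data)
        return data

    def delete(self, name):
        self._files.pop(name, None)
```

In the hardware version these operations become state machines over block storage; the point of the model is only the operation set and file-handle semantics.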

    New Japan Production Model, An Advanced Production Management Principle - Key To Strategic Implementation Of New JIT

    This paper proposes the New Japan Production Model, an advanced production management principle that further advances TPS (Toyota Production System) into what is called Advanced TPS, and which systematizes Japanese production management methodology for strategic production. The New Japan Production Model, a new management technology principle proposed and verified in previous studies, was developed by establishing a Global Production Technology and Management Model based on New JIT, utilizing three core technologies (TMS, TDS, and TPS) relating to hardware systems and Science TQM relating to software systems. Forming the model through these core technologies yields tight linkage of business processes, enabling a speedy production cycle by using the "Intelligent Quality Control System, TPS-QAS", the "Highly Reliable Production System, V-MICS", "Renovating Work Environment, TPS-IWQM", and "Bringing up Intelligent Operators, V-IOS". The effectiveness of the proposed New Japan Production Model was verified at Toyota Motor Corporation.

    Avionics standards, software and IMA

    The paper covers the definition of Integrated Modular Avionics (IMA), the associated avionics standards, and the impact on avionics software. These standards were developed by the ARINC and RTCA/EUROCAE committees, in which all avionics stakeholders are involved. 2005 is a key year for standardization: ARINC653 part 1 supplement 2 and part 3 are ready for publishing, and RTCA-SC200 / EUROCAE-WG60 is under ballot. The concepts of IMA, the new architecture in avionics, were defined in the late eighties and published for the first time in the ARINC651 standard in 1991. The IMA concepts were first applied on the Boeing 777, extended and used on the Airbus A380, and are now selected for the future Boeing 787. These concepts divide the avionic embedded domain into Platform (Hardware + Core Software) and Applications instead of Hardware and Software. Several applications of different criticality levels can reside on the same platform. The consequence was the development of new standards and guidelines supporting these concepts, e.g.:
    - ARINC653 defines the API and the behavior of the Core Software services.
    - DO-255/ED-96 contains the description of an Avionic Computing Resource (a platform separated from its hosted applications).
    - DO-248B/ED-94B clarifies DO-178B/ED-12B and defines concepts like robust partitioning.
    - SC200/WG60 (future ED-124) contains the IMA Development Guidance and Certification.
    - SC205/WG71 has started; it reviews and extends DO-178B/ED-12B and DO-248B/ED-94B with regard to new technologies.
    The paper describes the objectives and the results of these standardization committees. It focuses on the ARINC653 and ED-124 standards and briefly presents the associated standards.

    Application of Information and Communication Technology (ICT) in Service Delivery in Nigerian Private University Libraries

    Information and Communication Technologies have pervaded every sphere of human endeavour, and nowhere has their impact been felt more than in the library. ICT covers all forms of computer and communications equipment and software used to create, design, store, transmit, interpret, and manipulate information in its various formats. Personal computers, laptops, tablets, mobile phones, transport systems, televisions, and network technologies are just some examples of the diverse array of ICT tools. The research was carried out to fill the knowledge gap regarding how private university libraries have utilized information and communication technologies to deliver services to their users. The research used the survey method: questionnaires were distributed to librarians in the two institutions, who were requested to give the answers they felt were germane to the questions asked. Findings from the investigation revealed that private university libraries have made giant strides in applying ICT hardware and software to deliver services to their users. It was also revealed that private university libraries utilized open-source integrated library software to deliver effective services: the Babcock University Library used the KOHA integrated library software, while the Veritas University Library used the OpenBiblio integrated library software. The use of this integrated library software has made core library functions such as cataloguing, classification, and circulation easier and has rendered the "backlog syndrome" a thing of the past in private university libraries.

    BioThreads: a novel VLIW-based chip multiprocessor for accelerating biomedical image processing applications

    We discuss BioThreads, a novel, configurable, extensible system-on-chip multiprocessor and its use in accelerating biomedical signal processing applications such as imaging photoplethysmography (IPPG). BioThreads is derived from the LE1 open-source VLIW chip multiprocessor and efficiently handles instruction-, data-, and thread-level parallelism. In addition, it supports a novel mechanism for the dynamic creation and allocation of software threads to uncommitted processor cores by implementing key POSIX Threads primitives directly in hardware, as custom instructions. In this study, the BioThreads core is used to accelerate the calculation of the oxygen saturation map of living tissue in an experimental setup consisting of a high-speed image acquisition system connected to an FPGA board and to a host system. Results demonstrate near-linear acceleration of the core kernels of the target blood perfusion assessment with an increasing number of hardware threads. The BioThreads processor was implemented in both standard-cell and FPGA technologies; in the first case, and for an issue width of two, full real-time performance is achieved with 4 cores, whereas on a mid-range Xilinx Virtex-6 device it is achieved with 10 dual-issue cores. An 8-core LE1 VLIW FPGA prototype of the system achieved 240 times faster execution than the scalar MicroBlaze processor, demonstrating the scalability of the proposed solution relative to a state-of-the-art FPGA vendor-provided soft CPU core.
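    The idea of allocating software threads to uncommitted cores via a hardware thread-create primitive can be sketched as a simple free-core pool. This toy model is illustrative only; the names and policy are invented and are not the LE1/BioThreads mechanism:

```python
class CoreAllocator:
    """Toy model of hardware thread creation: a pthread_create-style
    custom instruction claims an uncommitted core from a free pool,
    and join returns it. Purely illustrative."""

    def __init__(self, n_cores):
        self.free = list(range(n_cores))  # uncommitted cores
        self.running = {}                 # thread id -> core it occupies

    def thread_create(self, tid):
        # In hardware this would be one custom instruction: claim a core.
        if not self.free:
            return None  # no uncommitted core available
        core = self.free.pop(0)
        self.running[tid] = core
        return core

    def thread_join(self, tid):
        # Joining a finished thread returns its core to the free pool.
        self.free.append(self.running.pop(tid))
```

Implementing this claim/release handshake as single instructions avoids the OS scheduling path entirely, which is what makes the thread dispatch cheap enough to scale near-linearly.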