1,447 research outputs found

    6502 emulator on FPGA

    The 6502 microprocessor was once used in almost every microcomputer of the 1980s, including the Apple II line of computers, the Commodore PET, the Commodore 64, the Atari 8-bit series, and even the Nintendo Entertainment System (NES) video game console. The objective of this project is to emulate the once-famous 6502 microprocessor on an FPGA chip. The FPGA-based 6502 microprocessor had to emulate the functionality of a real 6502 microprocessor; accurate pinout emulation is desired but not a must. The 6502 assembly language is easy to learn, and building a computer based on this microprocessor requires very few parts, making this project a great experiential learning process. The scope of this project requires the student to have an in-depth understanding of computer system architecture, especially the 6502 architecture; of Verilog, to understand the existing 6502 source code from Bird Computer; and of the FPGA development process (synthesis tools), to transfer the Verilog code to the FPGA chip. Thus far, the resources and information on the 6502 microprocessor look promising. The student's earlier scope was to write the 6502 code in Verilog HDL, but since code was already available from Bird Computer (coded as a state machine), the student changed his objective to understanding the existing code and implementing it on the FPGA only. As problems occurred along the way with the hardware implementation, the focus switched again to simulating the existing code, the ALU, or a simple processor, both to build up the student's understanding and to document the work for future project expansion. To test the functionality of the 6502 system, the student will either find an existing application or write a simple program to run on the FPGA-based 6502 system.
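
    To illustrate what "emulating the functionality" of the 6502 means in practice, here is a minimal Python sketch of a fetch-decode-execute loop covering just two 6502 opcodes (LDA and ADC immediate). It is an illustrative assumption about the emulation pattern, not the Bird Computer Verilog design.

```python
# Minimal sketch of a 6502-style fetch-decode-execute loop.
# Illustrative only: models two opcodes, not the Bird Computer Verilog core.

class CPU6502:
    def __init__(self, memory):
        self.memory = memory      # 64 KiB address space as a bytearray
        self.a = 0                # accumulator
        self.pc = 0               # program counter
        self.carry = 0            # carry flag (C)

    def step(self):
        opcode = self.memory[self.pc]
        self.pc += 1
        if opcode == 0xA9:        # LDA #imm: load immediate into A
            self.a = self.memory[self.pc]
            self.pc += 1
        elif opcode == 0x69:      # ADC #imm: add immediate + carry to A
            result = self.a + self.memory[self.pc] + self.carry
            self.carry = 1 if result > 0xFF else 0
            self.a = result & 0xFF
            self.pc += 1
        else:
            raise NotImplementedError(f"opcode {opcode:#04x}")

# LDA #$10; ADC #$32  ->  A = 0x42
mem = bytearray(0x10000)
mem[0:4] = bytes([0xA9, 0x10, 0x69, 0x32])
cpu = CPU6502(mem)
cpu.step(); cpu.step()
assert cpu.a == 0x42
```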

    Low Power Processor Architectures and Contemporary Techniques for Power Optimization – A Review

    Technological evolution has significantly increased the number of transistors for a given die area and raised switching speeds from a few MHz to the GHz range. This shrinking of feature size and boost in performance demand a corresponding reduction in supply voltage and effective management of power dissipation in chips with millions of transistors. It has triggered a substantial amount of research into power-reduction techniques for almost every aspect of the chip, particularly the processor cores it contains. This paper presents an overview of techniques for achieving power efficiency, mainly at the processor-core level, but also visits related domains such as buses and memories. Various processor parameters and features, such as supply voltage, clock frequency, cache and pipelining, can be optimized to reduce the power consumption of the processor; this paper discusses the ways in which they can be optimized. Emerging power-efficient processor architectures are also overviewed, and research activities are discussed that should help the reader identify how these factors contribute to a processor's power consumption. Some of these concepts have already been established, whereas others are still active research areas. © 2009 ACADEMY PUBLISHER
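
    The relation underlying most of the voltage/frequency techniques surveyed in such reviews is the standard dynamic-power equation P_dyn = alpha * C * V^2 * f. The short sketch below, with made-up parameter values (not taken from the paper), shows why scaling voltage and frequency together yields better-than-linear power savings.

```python
# Dynamic CMOS power: P_dyn = alpha * C * V^2 * f
# (alpha: activity factor, C: switched capacitance, V: supply voltage, f: clock)
# Parameter values below are illustrative, not taken from the paper.

def dynamic_power(alpha, cap_farads, v_supply, freq_hz):
    return alpha * cap_farads * v_supply**2 * freq_hz

base = dynamic_power(alpha=0.1, cap_farads=1e-9, v_supply=1.2, freq_hz=2e9)
# DVFS: halving f usually permits a lower V as well; scale both together.
scaled = dynamic_power(alpha=0.1, cap_farads=1e-9, v_supply=0.9, freq_hz=1e9)
print(f"baseline {base:.3f} W, scaled {scaled:.3f} W "
      f"({100 * (1 - scaled / base):.0f}% reduction)")   # ~72% reduction
```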

    From specialized to core course in Telecommunications degree: Experiences from digital electronic design and verification

    The European Higher Education Area (EHEA) defines the competences for the professional practice of a Telecommunications Engineer. The School of Telecommunication Engineering of the Universitat Politècnica de València (Valencia, Spain) provides an integrated education program consisting of a Graduate (GITST) plus a Master (MUIT) degree. The GITST course offers four specialization tracks (Electronics, Telematics, Communication Systems and Multimedia) for the proper acquisition of the knowledge and competences of future Telecommunications Engineers. In 2018, the graduate program implemented a structural change in the organization of subjects to reinforce important skills, in which a course on digital electronics design and verification (Integration of Digital Systems, ISDIGI) was transformed into a core subject of the study plan. In this paper, we describe the methodology and adaptation of ISDIGI (a project-based learning intermediate HDL course that covers design and verification abilities) to the new GITST curriculum, as well as the process of moving from a specialized to a core subject.
    Martínez Millana, A.; Liberos Mascarell, A.; Monzó Ferrer, JM.; Martínez Peiró, MA.; Martínez Pérez, JD.; Gadea Gironés, R. (2020). From specialized to core course in Telecommunications degree: Experiences from digital electronic design and verification. Editorial Universitat Politècnica de València. 229-238. https://doi.org/10.4995/INN2019.2019.10133

    Innovative teaching of IC design and manufacture using the Superchip platform

    In this paper we describe how an intelligent chip architecture has allowed a large cohort of undergraduate students to be given effective practical insight into IC design by designing and manufacturing their own ICs. To achieve this, an efficient chip architecture, the “Superchip”, has been developed, which allows multiple student designs to be fabricated on a single IC and encapsulated in a standard package without excessive cost in terms of time or resources. We demonstrate how the practical process has been tightly coupled with theoretical aspects of the degree course and how transferable skills are incorporated into the design exercise. Furthermore, the students are introduced at an early stage to the key concepts of team working, exposure to real deadlines and collaborative report writing. This paper provides details of the teaching rationale, design exercise overview, design process, chip architecture and test regime.

    A Project-based Approach to FPGA-aided Teaching of Digital Systems

    This article shares experience and lessons learned in teaching a course on programmable logic design at Universitas Muhammadiyah Surakarta, Indonesia. The course is part of a bachelor of engineering (electrical) degree program. A project-based approach was chosen to strengthen the students' understanding and practical skills. Each year's project challenges the students to implement a digital system on an FPGA design board. Here, the background and curriculum context of the course are presented, the projects and their challenges are discussed, and finally lessons learned and future improvements to the student projects are described. Index Terms—project-based learning, field programmable gate arrays, education, programmable logic design, hardware design languages, laboratories

    Neural network computing using on-chip accelerators

    The use of neural networks, machine learning, or artificial intelligence, in its broadest and most controversial sense, has been a tumultuous journey involving three distinct hype cycles and a history dating back to the 1960s. Resurgent, enthusiastic interest in machine learning and its applications bolsters the case for machine learning as a fundamental computational kernel. Furthermore, researchers have demonstrated that machine learning can be utilized as an auxiliary component of applications to enhance or enable new types of computation, such as approximate computing or automatic parallelization. In our view, machine learning becomes not the underlying application but a ubiquitous component of applications. This view necessitates a different approach towards the deployment of machine learning computation, one that spans not only the hardware design of accelerator architectures but also the user and supervisor software that enables safe, simultaneous use of machine learning accelerator resources. In this dissertation, we propose a multi-transaction model of neural network computation to meet the needs of future machine learning applications. We demonstrate that this model, encompassing a decoupled backend accelerator for inference and learning together with hardware and software for managing neural network transactions, can be achieved with low overhead and integrated with a modern RISC-V microprocessor. Our extensions span user and supervisor software and data structures and, coupled with our hardware, enable multiple transactions from different address spaces to execute simultaneously, yet safely. Together, our system demonstrates the utility of a multi-transaction model in delivering energy-efficiency improvements and improving overall accelerator throughput for machine learning applications.
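
    A rough Python sketch of the multi-transaction idea as described above: independent neural-network requests from different address spaces are wrapped as transactions and queued to one shared accelerator backend. All names and structures here are hypothetical illustrations, not the dissertation's actual hardware/software interface.

```python
# Hypothetical sketch of a multi-transaction accelerator interface: several
# processes submit NN inference/learning work as isolated transactions that
# a single backend executes. Names are illustrative, not the actual design.

from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Transaction:
    asid: int                 # address-space ID of the submitting process
    kind: str                 # "inference" or "learning"
    inputs: list = field(default_factory=list)

class Accelerator:
    def __init__(self):
        self.pending = Queue()

    def submit(self, txn: Transaction):
        self.pending.put(txn)  # supervisor software would validate asid here

    def run(self):
        while not self.pending.empty():
            txn = self.pending.get()
            # Each transaction is tagged with its own address space, so
            # concurrent users cannot observe each other's data.
            print(f"asid={txn.asid}: {txn.kind} on {len(txn.inputs)} inputs")

acc = Accelerator()
acc.submit(Transaction(asid=1, kind="inference", inputs=[0.1, 0.9]))
acc.submit(Transaction(asid=2, kind="learning", inputs=[0.5]))
acc.run()
```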

    A short curriculum of the robotics and technology of computer lab

    Our research lab is directed by Prof. Anton Civit. It is an interdisciplinary group of 23 researchers who carry out their teaching and research at the Escuela Politécnica Superior (Higher Polytechnic School) and the Escuela de Ingeniería Informática (Computer Engineering School). The main research fields are: a) industrial and mobile robotics; b) neuro-inspired processing using electronic spikes; c) embedded and real-time systems; d) parallel and massive-processing computer architectures; e) information technologies for rehabilitation, disability and the elderly; f) Web accessibility and usability. In this paper, the lab's history is presented and its main publications and research projects over the last few years are summarized.

    Microcontroller-based multiple-input multiple-output transmitter systems

    Multiple-Input Multiple-Output (MIMO) systems use multiple antennas at both the transmitter and receiver to increase data throughput and/or system reliability. A MIMO transmitter can be implemented using a variety of approaches. This work describes some of the approaches that can be used to generate the transmitted waveforms and discusses the features and limitations of each. In particular, it shows how a microcontroller-based system can be used for applications which require low power consumption. This thesis also describes the high-level design of a microcontroller-based MIMO transmitter. The computational speed of the microcontroller, as compared to Field-Programmable Gate Arrays (FPGAs) and Digital Signal Processors (DSPs), coupled with the other tasks it may need to handle, limits the transmitted data rate. However, this low-power and low-cost design may make it attractive for some applications --Abstract, page iii
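
    To make the transmit side concrete: one of the simplest MIMO waveform-generation schemes a small microcontroller could compute is two-antenna Alamouti space-time block coding, sketched below in Python. This is a hedged illustration of a representative MIMO transmit scheme, not necessarily the one used in the thesis.

```python
# Alamouti 2x1 space-time block code: a simple MIMO transmit scheme that a
# low-power microcontroller could compute symbol pairs for. Illustrative
# sketch; not necessarily the scheme used in the thesis.

def alamouti_encode(symbols):
    """Map each symbol pair (s1, s2) to two time slots on two antennas:
       slot 1: antenna1 -> s1,        antenna2 -> s2
       slot 2: antenna1 -> -conj(s2), antenna2 -> conj(s1)"""
    ant1, ant2 = [], []
    for s1, s2 in zip(symbols[0::2], symbols[1::2]):
        ant1 += [s1, -s2.conjugate()]
        ant2 += [s2, s1.conjugate()]
    return ant1, ant2

# QPSK symbols split across two antennas
ant1, ant2 = alamouti_encode([1+1j, 1-1j, -1+1j, -1-1j])
print(ant1)  # [(1+1j), (-1-1j), (-1+1j), (1-1j)]
print(ant2)  # [(1-1j), (1-1j), (-1-1j), (-1-1j)]
```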

    Implementation of Genetic Algorithms in FPGA-based Reconfigurable Computing Systems

    Genetic Algorithms (GAs) are used to solve many optimization problems in science and engineering. A GA is a heuristic approach which relies largely on random numbers to determine the approximate solution of an optimization problem. We use the Mersenne Twister Algorithm (MTA) to generate a non-overlapping sequence of random numbers with a period of 2^19937-1. The random numbers are generated from a state vector that consists of 624 elements. Our work on state vector generation and the GA implementation targets the solution of a flow-line scheduling problem, where the flow-lines have jobs to process and the goal is to find a suitable completion time for all jobs using a GA. The state vector generation algorithm (MTA) performs poorly in traditional von Neumann architectures due to its poor temporal and spatial locality; its performance is therefore limited by the speed at which we can access memory. With processor performance increasing by roughly 60% per year while memory latency improves only 7% per year, a new approach is needed for performance improvement. The GA implementation in a general-purpose microprocessor, though it performs reasonably well, has scope for performance gain in a parallel implementation. The parallel implementation of the GA can work as a kernel for applications that use a GA to reach a solution. Our approach is to implement the state vector generation process and the GA in an FPGA-based Reconfigurable Computing (RC) system with the goal of improving overall performance. Application design for FPGA-based RC systems is not trivial and performance improvement is not guaranteed: designing for RC systems requires algorithmic parallelism in order to exploit the inherent parallelism of the FPGA, and we use a high-level language that provides a level of abstraction from the lower-level hardware, making it difficult to fully exploit some of the architectural benefits of the FPGA. Considering these factors, we improve the state vector generation process algorithmically. Our implementation generates state vectors 5X faster than the previous implementation on a 2 GHz Intel Xeon microprocessor. The modified algorithm is also implemented in a Xilinx Virtex-4 FPGA, resulting in a 2.4X speedup. Improvement in this preprocessing step accelerates GA application performance, as random numbers are generated from these state vectors for the genetic operators. We simulate the basic operations of a GA in an FPGA to study its behavior in a parallel environment and analyze the results. The initial FPGA implementation of the GA runs about 7X slower than its microprocessor counterpart; the reasons are explained along with suggestions for improvement and future work.
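
    Since CPython's random module is itself an MT19937 implementation (period 2^19937-1, 624-element state vector), a compact software baseline of the kind of GA described here is easy to sketch. The flow-line encoding, fitness function and parameters below are assumptions for illustration, not the authors' implementation.

```python
# Minimal GA for a toy flow-line scheduling problem: a chromosome is a job
# order, fitness is the makespan on two sequential machines. Python's random
# module is itself an MT19937 generator. The encoding and parameters are
# illustrative assumptions, not the paper's design.

import random

PROC_TIMES = [[5, 3], [2, 6], [4, 4], [3, 2]]   # job x machine times

def makespan(order):
    done = [0, 0]                               # completion time per machine
    for job in order:
        done[0] += PROC_TIMES[job][0]           # machine 0 runs jobs in order
        done[1] = max(done[0], done[1]) + PROC_TIMES[job][1]
    return done[1]

def crossover(p1, p2):                          # order crossover (OX)
    cut = random.randrange(1, len(p1))
    head = p1[:cut]
    return head + [j for j in p2 if j not in head]

pop = [random.sample(range(4), 4) for _ in range(20)]
for _ in range(50):                             # generations
    pop.sort(key=makespan)
    parents = pop[:10]                          # truncation selection
    children = [crossover(*random.sample(parents, 2)) for _ in range(10)]
    for c in children:                          # swap mutation
        if random.random() < 0.2:
            i, j = random.sample(range(4), 2)
            c[i], c[j] = c[j], c[i]
    pop = parents + children

best = min(pop, key=makespan)
print(best, makespan(best))
```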

    Computer Architectures to Close the Loop in Real-time Optimization

    © 2015 IEEE. Many modern control, automation, signal processing and machine learning applications rely on solving a sequence of optimization problems which are updated with measurements of a real system that evolves in time. The solutions of these optimization problems are used to make decisions, which may be followed by changing some parameters of the physical system, resulting in a feedback loop between the computing and the physical system. Real-time optimization is not the same as fast optimization, because the computation is affected by an uncertain system that evolves in time. The suitability of a design should therefore not be judged from the optimality of a single optimization problem, but from the evolution of the entire cyber-physical system. The algorithms and hardware used for solving a single optimization problem in the office might therefore be far from ideal when solving a sequence of real-time optimization problems. Instead of there being a single optimal design, one has to trade off a number of objectives, including performance, robustness, energy usage, size and cost. We therefore provide a tutorial introduction to some of the questions and implementation issues that arise in real-time optimization applications. We concentrate on some of the decisions that have to be made when designing the computing architecture and algorithm, and argue that the choice of one informs the other.
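
    The closed loop discussed above can be summarized in a few lines of pseudocode-like Python: each sampling period, measure the plant, (approximately) solve an optimization problem within the compute budget, and apply the first decision. The scalar plant and gradient-based solver below are hedged illustrations of this receding-horizon pattern, not the paper's benchmark.

```python
# Receding-horizon loop: measure, solve an optimization problem under a
# deadline, apply the decision, repeat. The scalar plant and the
# gradient-based solver are illustrative assumptions, not the paper's setup.

def solve(x, iters):
    """Minimize (x + u)^2 + 0.1*u^2 over u by gradient descent.
    `iters` stands in for the compute budget of one sampling period:
    a real-time solver returns its best iterate when the deadline hits."""
    u = 0.0
    for _ in range(iters):
        grad = 2 * (x + u) + 0.2 * u
        u -= 0.1 * grad
    return u

x = 5.0                       # plant state
for k in range(10):           # one solve per sampling period
    u = solve(x, iters=20)    # fewer iters = cheaper, less optimal control
    x = x + u                 # apply the input; plant evolves
    print(f"step {k}: u={u:.3f}, x={x:.3f}")
```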