
    A Survey of Prediction and Classification Techniques in Multicore Processor Systems

    In multicore processor systems, being able to accurately predict the future provides new optimization opportunities that could not otherwise be exploited. For example, an oracle able to predict a certain application's behavior running on a smartphone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling (DVFS) modes that guarantee minimum levels of desired performance while saving energy and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continue to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction has evolved from simple forecasting into sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior. This can be exploited by novel optimization techniques spanning all layers of the computing stack. In this survey paper, we discuss the most popular prediction and classification techniques in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help the reader interested in employing prediction in the optimization of multicore processor systems.
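    To make the proactive idea concrete, the following minimal Python sketch (not taken from the survey; the frequency table, the EWMA predictor, and the capacity model are all illustrative assumptions) shows a history-based predictor driving DVFS mode selection:

        # Minimal sketch (not from the survey): a history-based predictor
        # driving DVFS mode selection. The frequency levels, the EWMA
        # predictor, and the capacity model are illustrative assumptions.

        FREQ_LEVELS_MHZ = [600, 1200, 1800, 2400]  # hypothetical operating points

        class EwmaPredictor:
            """Predicts next-interval CPU utilization via an exponential average."""
            def __init__(self, alpha=0.5):
                self.alpha = alpha
                self.estimate = 0.0

            def update(self, observed_util):
                # Blend the new observation into the running estimate.
                self.estimate = (self.alpha * observed_util
                                 + (1 - self.alpha) * self.estimate)
                return self.estimate

        def select_frequency(predicted_util):
            """Proactively pick the lowest frequency expected to cover demand."""
            required_mhz = predicted_util * FREQ_LEVELS_MHZ[-1]
            for freq in FREQ_LEVELS_MHZ:
                if freq >= required_mhz:
                    return freq
            return FREQ_LEVELS_MHZ[-1]

        # Example: a utilization trace turned into proactive frequency decisions.
        predictor = EwmaPredictor(alpha=0.5)
        for util in [0.10, 0.15, 0.60, 0.90, 0.85, 0.20]:
            predicted = predictor.update(util)
            print(f"observed={util:.2f} predicted={predicted:.2f} "
                  f"-> {select_frequency(predicted)} MHz")

    The point of the sketch is the structure the survey describes: observe, predict, then act before the demand materializes, rather than reacting after it.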

    An effective AMS Top-Down Methodology Applied to the Design of a Mixed-Signal UWB System-on-Chip

    The design of Ultra-Wideband (UWB) mixed-signal SoCs for localization applications in wireless personal area networks is currently investigated by several researchers. The complexity of the design calls for effective top-down methodologies. We propose a layered approach based on VHDL-AMS for the first design stages and on an intelligent use of a circuit-level simulator for the transistor-level phase. We apply the latter to just one block at a time and wrap it within the system-level VHDL-AMS description. This method makes it possible to capture the impact of circuit-level design choices and non-idealities on system performance. To demonstrate the effectiveness of the methodology, we show how the refinement of the design affects specific UWB system parameters such as bit-error rate and localization estimates.
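    The block-at-a-time refinement idea can be illustrated with a small sketch. The following Python toy (not VHDL-AMS, and with invented block names and parameters) swaps a single behavioral block for a non-ideal, lower-level model while the rest of the chain stays abstract, exposing the impact on bit-error rate:

        # Toy Python recasting of the layered methodology (block names and
        # parameters invented): exactly one block is replaced by a non-ideal,
        # lower-level model while the rest of the chain stays behavioral, so
        # the effect of circuit-level non-idealities on BER becomes visible.

        import random

        def ideal_amplifier(sample):
            return sample  # behavioral model: perfectly linear

        def refined_amplifier(sample, gain=0.9, clip=0.8):
            # Lower-level model: finite gain and output saturation.
            return max(-clip, min(clip, gain * sample))

        def simulate_ber(amplifier, n_bits=10_000, noise_sigma=0.4, seed=0):
            """Antipodal signaling through the given amplifier model plus noise."""
            rng = random.Random(seed)
            errors = 0
            for _ in range(n_bits):
                bit = rng.choice([0, 1])
                tx = 1.0 if bit else -1.0
                rx = amplifier(tx) + rng.gauss(0.0, noise_sigma)
                if (rx > 0.0) != bool(bit):
                    errors += 1
            return errors / n_bits

        print("BER, all-behavioral chain:", simulate_ber(ideal_amplifier))
        print("BER, one refined block:   ", simulate_ber(refined_amplifier))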

    A VHDL-AMS Simulation Environment for an UWB Impulse Radio Transceiver

    Ultra-Wide-Band (UWB) communication based on the impulse radio paradigm is becoming increasingly popular. According to the IEEE 802.15 WPAN Low Rate Alternative PHY Task Group 4a, UWB will play a major role in localization applications, thanks to the high time resolution of UWB signals, which allows accurate indirect measurements of the distance between transceivers. Key to the successful implementation of UWB transceivers is the level of integration that will be reached, for which a simulation environment that helps take appropriate design decisions is crucial. With this motivation, in this paper we propose a multiresolution UWB simulation environment based on the VHDL-AMS hardware description language, along with a methodology that helps tackle the complexity of designing a mixed-signal UWB System-on-Chip. We applied the methodology and used the simulation environment for the specification and design of a UWB transceiver based on the energy detection principle. As a by-product, simulation results show the effectiveness of UWB in the so-called ranging application, that is, the accurate evaluation of the distance between a pair of transceivers using the two-way-ranging method.
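    For reference, the two-way-ranging computation mentioned above reduces to subtracting the responder's turnaround delay from the measured round trip and converting the remaining time of flight into distance. A back-of-the-envelope Python sketch with illustrative numbers:

        # Back-of-the-envelope sketch of two-way ranging (numbers illustrative):
        # subtract the responder's turnaround delay from the measured round trip
        # and convert the remaining time of flight into distance.

        C = 299_792_458.0  # speed of light, m/s

        def two_way_range(t_round_s, t_reply_s):
            """Distance estimate from round-trip and reply-turnaround times."""
            time_of_flight = (t_round_s - t_reply_s) / 2.0
            return C * time_of_flight

        # A 10 m separation means ~33.4 ns one-way flight time; with a 1 us
        # turnaround, the measured round trip is 2 * tof + turnaround.
        tof = 10.0 / C
        print(f"{two_way_range(2 * tof + 1e-6, 1e-6):.3f} m")  # -> 10.000 m

    This is also why the high time resolution of UWB pulses matters: each nanosecond of timing error corresponds to roughly 30 cm of ranging error.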

    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI)-based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence. Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones but is considered out of reach for nano-drones with a size of a few cm². In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end, DNN-based visual navigation. To achieve this goal, we developed a complete methodology for the parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology, we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner it achieves 18 fps while still consuming, on average, just 3.5% of the power envelope of the deployed nano-aircraft.
    Comment: 15 pages, 13 figures, 5 tables, 2 listings; accepted for publication in the IEEE Internet of Things Journal (IEEE IoT-J).
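    A quick sanity check on the figures quoted above (pure arithmetic on the stated numbers, not additional data from the paper) translates average power and frame rate into a per-frame energy budget:

        # Pure arithmetic on the figures quoted in the abstract (a sanity check,
        # not additional data from the paper): average compute power and frame
        # rate translate into a per-frame energy budget.

        avg_power_w = 0.064  # 64 mW average, as stated
        fps_realtime = 6.0   # real-time constraint, as stated
        fps_peak = 18.0      # peak performance corner, as stated

        print(f"~{avg_power_w / fps_realtime * 1e3:.1f} mJ per frame at 6 fps")
        # If the same average power held at the peak rate (an assumption; the
        # paper reports the peak corner separately), the budget would shrink to:
        print(f"~{avg_power_w / fps_peak * 1e3:.1f} mJ per frame at 18 fps")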

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced in the past for VLSI design and manufacturing. Moreover, we discuss the scope of AI/ML applications at various abstraction levels in the future to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
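    As a toy illustration of the kind of task such approaches automate, the following Python sketch (entirely synthetic data and hypothetical features, not from the paper) fits a regression model that predicts a timing metric from cheap structural features instead of running a full analysis:

        # Toy illustration (entirely synthetic data, hypothetical features) of
        # the kind of ML task the review covers: predicting a timing metric from
        # cheap structural features instead of running a full analysis.

        import numpy as np

        rng = np.random.default_rng(42)

        # Invented training set: [fan-out, wire length (um)] -> path delay (ps).
        X = rng.uniform([1, 10], [8, 500], size=(200, 2))
        true_w = np.array([12.0, 0.8])                 # invented ground truth
        y = X @ true_w + 30.0 + rng.normal(0, 5, 200)  # noisy delays

        # Ordinary least squares with a bias column.
        A = np.hstack([X, np.ones((200, 1))])
        w, *_ = np.linalg.lstsq(A, y, rcond=None)

        query = np.array([4.0, 250.0, 1.0])  # fan-out 4, 250 um of wire
        print(f"predicted delay: {query @ w:.1f} ps")

    Real flows reviewed in such surveys use far richer features and models, but the payoff is the same: a learned predictor standing in for an expensive analysis step.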

    Energy Detection UWB Receiver Design using a Multi-resolution VHDL-AMS Description

    Ultra-Wide-Band (UWB) impulse radio systems are appealing for location-aware applications. There is growing interest in the design of UWB transceivers with reduced complexity and power consumption. Non-coherent approaches to receiver design based on energy detection schemes seem suitable for this aim and have been adopted in the project whose preliminary results are reported in this paper. The objective is the design of a UWB receiver with a top-down methodology, starting from Matlab-like models and refining the description down to the final transistor level. This goal will be achieved through an integrated use of VHDL for the digital blocks and VHDL-AMS for the mixed-signal and analog circuits. Consistent results are obtained using VHDL-AMS and Matlab. However, the CPU time cost strongly depends on the description used in the VHDL-AMS models. In order to show the functionality of the UWB architecture, the receiver's most critical functions are simulated, showing results in good agreement with expectations.
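    The energy detection principle itself is simple enough to sketch: square the received samples, integrate over the expected pulse window, and compare against a threshold, with no carrier or phase recovery required. A minimal Python illustration (all parameters invented):

        # Minimal sketch of the energy-detection principle (all parameters
        # invented): square the received samples, integrate over the expected
        # pulse window, and decide by threshold; no carrier or phase recovery.

        import random

        def detect_pulse(samples, threshold):
            """Non-coherent decision: in-window energy versus a threshold."""
            energy = sum(s * s for s in samples)
            return energy > threshold, energy

        rng = random.Random(1)
        noise_only = [rng.gauss(0.0, 0.1) for _ in range(32)]
        with_pulse = [x + (1.0 if 8 <= i < 12 else 0.0)  # short UWB-like pulse
                      for i, x in enumerate(noise_only)]

        for label, window in [("noise only", noise_only), ("with pulse", with_pulse)]:
            hit, energy = detect_pulse(window, threshold=1.0)
            print(f"{label}: energy={energy:.2f} detected={hit}")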