
    Approximation and Optimization of an Auditory Model for Realization in VLSI Hardware

    The Auditory Image Model (AIM) is a software tool set developed to functionally model the role of the ear in the human hearing process. AIM includes detailed filter equations for the major functional portions of the ear. Currently, AIM is run on a workstation and requires 10 to 100 times real-time to process audio information and produce an auditory image. An all-digital approximation of AIM which is suitable for implementation in very large scale integrated circuits is presented. This document details the mathematical models of AIM and the approximations and optimizations used to simplify the filtering and signal processing accomplished by AIM. Included are the details of an efficient multi-rate architecture designed for sub-micron VLSI technology to carry out the approximated equations. Finally, simulation results are included which indicate that the architecture, when implemented in 0.8 µm CMOS VLSI, will sustain real-time operation on a 32-channel system. The same tests also indicate that the chip will be approximately 3.3 mm² and consume approximately 18 mW. The details of a new and efficient method for computing an approximate logarithm (base two) on binary integers are also presented. The approximate logarithm algorithm is used to convert sound energy into millibels quickly and with low power. Additionally, the algorithm is easily extended to compute an approximate logarithm in base ten, which broadens the class of problems to which it may be applied.
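    The abstract does not spell out the approximation itself, but a Mitchell-style approximate base-two logarithm, where the integer part is the position of the leading one bit and the bits just below it serve as a linear fractional part, matches this description and is cheap in hardware. The sketch below, including the millibel scaling constant and the 8-bit fixed-point format, is an illustrative assumption rather than the algorithm actually used in the thesis.

```cpp
#include <cstdint>
#include <cstdio>

// Mitchell-style approximate log2 of a binary integer (illustrative sketch):
// integer part = index of the leading 1 bit, fractional part = the next
// `frac_bits` bits taken as a linear approximation. Result is fixed point.
uint32_t approx_log2_fixed(uint32_t x, unsigned frac_bits = 8) {
    if (x == 0) return 0;                    // log2(0) undefined; clamp to 0
    unsigned msb = 0;
    for (uint32_t v = x; v >>= 1; ) ++msb;   // position of the leading 1 bit
    uint32_t mask = (1u << frac_bits) - 1;
    uint32_t frac = (msb >= frac_bits)
                        ? (x >> (msb - frac_bits)) & mask
                        : (x << (frac_bits - msb)) & mask;
    return (msb << frac_bits) | frac;        // integer.fraction
}

int main() {
    // Example use: converting a sound-energy value to (relative) millibels.
    // 1 bel = 1000 millibels and log10(x) = log2(x) / log2(10), so
    // millibels ~= log2(x) * 1000 / 3.3219 ~= log2(x) * 301.03
    // (the reference level is ignored in this sketch).
    uint32_t energy = 50000;
    double lg2 = approx_log2_fixed(energy) / 256.0;   // 8 fractional bits
    printf("approx log2 = %.3f  approx millibels = %.1f\n", lg2, lg2 * 301.03);
}
```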

    Techniques for low-cost spectrum analysis on quadrature demodulation architectures

    The Decimator, an SED Systems Ltd. product, is a PCI slot card that performs both time and frequency domain measurements of given input signals. It is essentially a more economical version of a bench spectrum analyzer or oscilloscope, with a PC interface. Several issues limit the speed and accuracy of the Decimator's results, and the study of these issues is the focus of this thesis. These issues include, but are not limited to, the following: 1) imbalances between the received In-phase and Quadrature-phase channels; 2) the FFT and windowing functions are performed by a microcontroller, but it is desired that they be migrated to an FPGA. While solutions to the first issue are being implemented and verified, the second issue is not one of simply reducing a source of error; it requires a cost-benefit analysis of migrating these signal processing algorithms from an ARM microcontroller to a Xilinx FPGA.
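    As a concrete illustration of the first issue, the sketch below shows one common blind I/Q imbalance compensation scheme based on second-order statistics; the signal model, the estimator, and the correction are textbook assumptions and are not necessarily what the Decimator implements.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Assumed imbalance model (illustrative, not the Decimator's actual one):
//   i_m = i,   q_m = g * (q*cos(phi) + i*sin(phi))
// with gain imbalance g and phase imbalance phi between the two channels.
struct IQCorrection {
    double g = 1.0, sin_phi = 0.0, cos_phi = 1.0;

    // Estimate g and phi from second-order statistics, assuming the ideal I
    // and Q components are uncorrelated and of equal power.
    void estimate(const std::vector<double>& im, const std::vector<double>& qm) {
        double pii = 0, pqq = 0, piq = 0;
        for (size_t n = 0; n < im.size(); ++n) {
            pii += im[n] * im[n];
            pqq += qm[n] * qm[n];
            piq += im[n] * qm[n];
        }
        g = std::sqrt(pqq / pii);                            // gain error
        sin_phi = std::clamp(piq / (g * pii), -0.99, 0.99);  // phase error
        cos_phi = std::sqrt(1.0 - sin_phi * sin_phi);
    }

    // Invert the assumed model to recover a balanced I/Q pair.
    void apply(double im, double qm, double& i, double& q) const {
        i = im;
        q = (qm / g - im * sin_phi) / cos_phi;
    }
};
```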

    The DS-Pnet modeling formalism for cyber-physical system development

    This work presents the DS-Pnet modeling formalism (Dataflow, Signals and Petri nets), designed for the development of cyber-physical systems, combining the characteristics of Petri nets and dataflows to support the modeling of mixed systems containing both reactive parts and data processing operations. Inheriting the features of the parent IOPT Petri net class, including an external interface composed of input and output signals and events, the addition of dataflow operations brings enhanced modeling capabilities to specify mathematical data transformations and graphically express the dependencies between signals. Data-centric systems that do not require reactive controllers are designed using pure dataflow models. Component-based model composition enables reusing existing components, creating libraries of previously tested components, and hierarchically decomposing complex systems into smaller sub-systems. A precise execution semantics was defined, considering the relationship between dataflow and Petri net nodes, providing an abstraction to define the interface between reactive controllers and input and output signals, including analog sensors and actuators. The new formalism is supported by the IOPT-Flow Web-based tool framework, offering tools to design and edit models, simulate model execution in the Web browser, plus model-checking and automatic software/hardware code generation tools to implement controllers running on embedded devices (C, VHDL and JavaScript). A new communication protocol was created to permit the automatic implementation of distributed cyber-physical systems composed of networks of remote components communicating over the Internet. The editor tool connects directly to remote embedded devices running DS-Pnet models and may import remote components into new models, helping to simplify the creation of distributed cyber-physical applications, where the communication between distributed components is specified just by drawing arcs. Several application examples were designed to validate the proposed formalism and the associated framework, ranging from hardware solutions and industrial applications to distributed software applications.
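    To make the combined semantics more tangible, here is a deliberately simplified sketch of one execution step in which dataflow operations are evaluated from the current signal values and enabled Petri-net transitions then fire; the data structures, guard mechanism, and firing policy are invented for illustration and do not reproduce the actual DS-Pnet/IOPT-Flow semantics.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Illustrative, simplified DS-Pnet-like model: signals are named real values,
// dataflow operations transform them, and transitions move tokens when their
// input places are marked and their guard over the signals holds.
struct Transition {
    std::vector<int> in_places, out_places;
    std::function<bool(const std::map<std::string, double>&)> guard =
        [](const std::map<std::string, double>&) { return true; };
};
struct DataflowOp {
    std::vector<std::string> inputs;
    std::string output;
    std::function<double(const std::vector<double>&)> fn;
};

struct Model {
    std::vector<int> marking;                 // tokens per place
    std::vector<Transition> transitions;
    std::vector<DataflowOp> ops;
    std::map<std::string, double> signals;    // input/output signals

    void step() {
        // 1. Evaluate dataflow operations from the current signal values.
        for (const auto& op : ops) {
            std::vector<double> args;
            for (const auto& name : op.inputs) args.push_back(signals[name]);
            signals[op.output] = op.fn(args);
        }
        // 2. Fire each enabled transition at most once (conflicts resolved
        //    in declaration order for this sketch).
        for (const auto& t : transitions) {
            bool enabled = t.guard(signals);
            for (int p : t.in_places) enabled = enabled && marking[p] > 0;
            if (!enabled) continue;
            for (int p : t.in_places)  --marking[p];
            for (int p : t.out_places) ++marking[p];
        }
    }
};
```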

    Hardware-software codesign in a high-level synthesis environment

    Interfacing hardware-oriented high-level synthesis to software development is a computationally hard problem for which no general solution exists. Under special conditions, the hardware-software codesign (system-level synthesis) problem may be analyzed with traditional tools and efficient heuristics. This dissertation introduces a new alternative to the currently used heuristic methods. The new approach combines the results of top-down hardware development with existing basic hardware units (bottom-up libraries) and compiler generation tools. The optimization goal is to maximize operating frequency or minimize cost with reasonable tradeoffs in other properties. The dissertation research provides a unified approach to hardware-software codesign. The improvements over previously existing design methodologies are presented in the framework of an academic CAD environment (PIPE). This CAD environment implements a sufficient subset of the functions of commercial microelectronics CAD packages. The results may be generalized for other general-purpose algorithms or environments. Reference benchmarks are used to validate the new approach. Most of the well-known benchmarks are based on discrete-time numerical simulations, digital filtering applications, and cryptography (an emerging field in benchmarking). As there is a need for high-performance applications, an additional requirement for this dissertation is to investigate the performance and design methods of pipelined hardware-software systems. The results demonstrate that the quality of existing heuristics does not change in the enhanced, hardware-software environment.
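    As a minimal illustration of the stated optimization goal, the sketch below evaluates one hardware/software partition against a toy cost model (sequential task execution, additive area); the task data and the cost model are invented and are far simpler than the heuristics studied in the dissertation.

```cpp
#include <vector>

// Toy cost model for one hardware/software partition (illustrative only).
struct Task { double sw_time_us, hw_time_us, hw_area; };

struct Partition {
    std::vector<bool> in_hw;   // true -> task implemented in hardware

    // Total silicon area of the tasks mapped to hardware.
    double area(const std::vector<Task>& t) const {
        double a = 0;
        for (size_t i = 0; i < t.size(); ++i)
            if (in_hw[i]) a += t[i].hw_area;
        return a;
    }
    // Total latency, assuming the tasks execute sequentially.
    double latency_us(const std::vector<Task>& t) const {
        double total = 0;
        for (size_t i = 0; i < t.size(); ++i)
            total += in_hw[i] ? t[i].hw_time_us : t[i].sw_time_us;
        return total;
    }
};
```

    A partitioning heuristic would explore such assignments, keeping only those whose area stays within budget while the latency (or the derived operating frequency) improves.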

    Definition and design of a new communication protocol and interfaces for data transmission in High Energy Physics experiments

    High Energy Physics experiments have very similar architectures with respect to systems for the acquisition of data from sensors and for control and management of the detector, and therefore similar requirements in terms of data rate, trigger latency, robustness of critical data against transmission errors, radiation hardness, power dissipation of hardware components, and material budget. The use of common solutions that can be reused in different application contexts can reduce the costs, risks and time needed for the development of new experiments. In particular, research and development appeared useful in the field of the electrical links employed for data transmission to and from Front End circuits inside the detectors, which allow power-consuming optical converters to be moved away from the interaction point. Starting from these considerations, the FF-LYNX (Fast and Flexible links) project was started in January 2009 by a collaboration between INFN-PI (Italian National Institute for Nuclear Physics, division of Pisa) and the Department of Information Engineering (DII_IET) of the University of Pisa, with the aim of defining a new serial communication protocol for the integrated distribution of TTC signals and Data Acquisition, satisfying the typical requirements of HEP applications and providing flexibility for its adaptation to different scenarios, and of implementing it in radiation-tolerant, low-power interfaces. The work presented in this thesis constituted a phase of the FF-LYNX project working plan and was carried out at the Pisa division of INFN: in particular, it dealt with the definition of a first version of the FF-LYNX protocol and the design of hardware transmitter and receiver interfaces implementing it. In this thesis, the purposes of the project are first presented and the methodology defined for the project work is outlined; then the FF-LYNX protocol (version 1) is described: the basic issues about trigger and data transmission that were considered in the definition of this version of the protocol are outlined, as well as the solutions adopted to address them, and the results of simulations on a high-level model of the link, intended to estimate various aspects of the protocol performance, are presented. Subsequently, the architecture defined for the interfaces implementing FF-LYNX protocol version 1 is illustrated, and the VHDL models of the transmitter and receiver blocks created in the design phase of the FF-LYNX interfaces are described in detail, also reporting the results of simulations on a VHDL test bench for the complete transmitter-receiver system. Finally, an FPGA-based emulator for the FF-LYNX transmitter-receiver system, foreseen as the final result of the FF-LYNX project's first year of activity, is outlined in its functional architecture, the development board chosen for its implementation is briefly described, and the results of preliminary synthesis trials of the designed TX and RX blocks onto the target FPGA are reported.
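    The abstract does not disclose the frame format, so the sketch below is purely hypothetical: it only illustrates the underlying idea of multiplexing fixed-latency trigger information with variable-rate data words on one serial link, which is the problem the FF-LYNX protocol addresses.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical frame: a header carrying a sync pattern and a trigger flag,
// followed by a bounded data payload. Field sizes and the sync value are
// invented; this is NOT the actual FF-LYNX frame layout.
struct Frame {
    uint8_t header;                  // sync nibble + trigger flag
    std::vector<uint8_t> payload;    // data-acquisition words
};

// The trigger bit always occupies the same header position, so its latency
// through the link stays constant regardless of how much data is queued.
Frame build_frame(bool trigger, const std::vector<uint8_t>& data_words) {
    Frame f;
    f.header = 0xA0 | (trigger ? 0x01 : 0x00);          // 0xA0 = assumed sync
    size_t n = std::min<size_t>(data_words.size(), 14); // assumed payload cap
    f.payload.assign(data_words.begin(), data_words.begin() + n);
    return f;
}
```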

    Hardware-software model co-simulation for GPU IP development

    This Master's thesis project aims to explore the possibility of a mixed simulation environment in which parts of a software model for emulating a hardware design may be swapped with their corresponding RTL description. More specifically, this work focuses on the software model for Arm's next-generation Mali GPU, which is used to understand system-on-chip properties, including functionality and performance. A component of this model (written in C++) is substituted with its hardware model (written in SystemVerilog) to be able to run simulations in a system context at a faster emulation speed, and with higher accuracy in the results, compared to a pure-software model execution. For this, a "co-simulation" environment is developed, using SystemVerilog's DPI-C as the main communication interface between C++ and SystemVerilog. The proposed environment contains new software and hardware blocks to enable the desired objective without major modifications in either the software Mali model or the substituted component. Metrics and results for characterizing this co-simulation environment are also provided, namely timing accuracy, data correctness and simulation time with respect to other previously available simulation options. These results aim to show that the proposed environment may open new use-cases and improve development and verification time of hardware components in a system such as the Mali GPU.
    The possibility of combining hardware designs and software in the same simulation environment opens new options and significantly improves the flexibility of verification processes as well as the characterization time of electronic designs. A practical method to realize this is developed and presented in this work for the case of a real Graphics Processing Unit IP. Nowadays, electronics designers and manufacturers compete in an increasingly faster race to provide the best and most efficient solutions to the market's expectations. The easiest example is the tendency of smartphone designers to provide a brand-new mobile phone model every year to meet consumers' demand. To meet these tighter and tighter deadlines, these companies need to find new ways of designing and verifying their products faster and more efficiently. In this context enters the work presented in this thesis: one of many possible solutions to improve the verification time of a hardware unit/block. Digital electronic circuits are commonly designed and modelled using Hardware Description Languages (HDLs), which are similar to computer languages such as C or Java, but different in the sense that HDLs actually describe the physical layout and connections of a digital circuit. These HDL designs can be simulated to verify their correct performance and characteristics with very high detail but, at the same time, this type of simulation is costly in terms of computational time and resources, due to the nature of the magnitudes and mechanisms being replicated on the computer running the simulation. On the other hand, software is written in computer languages directly, compiled to machine language and run sequentially by computers, in a much faster and more efficient manner. Therefore, what if the best of the two could be combined to simulate a digital design in which only a specific internal block is described in an HDL while the rest of the design is a software program? This would greatly reduce the simulation time of that block while preserving the accuracy that a simulation of an HDL design can provide.
    This thesis work is based on a specific part of Arm's next-generation Mali Graphics Processing Unit (GPU), for which a solution for mixing hardware and software in the same simulation is proposed. For this specific case, such a mechanism improves the development and testing time of new features for a Mali hardware IP, while at the same time opening new use-cases for future work in this direction.
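    The abstract names SystemVerilog DPI-C as the bridge between the C++ model and the RTL; the fragment below sketches what the C++ side of such a bridge can look like. The function names, the transaction format, and the pairing with the SystemVerilog import declarations shown in the comment are assumptions for illustration; only the DPI-C mechanism itself is standard.

```cpp
// C++ side of a hypothetical DPI-C bridge between a software model and a
// SystemVerilog testbench. On the SV side this would be paired with e.g.:
//   import "DPI-C" function int  model_next_request(output int addr);
//   import "DPI-C" function void model_put_response(input int data);
#include <queue>

namespace {
std::queue<int> pending_requests;   // filled by the C++ software model
std::queue<int> responses;          // drained by the C++ software model
}

extern "C" int model_next_request(int* addr) {
    // Called by the SV testbench each cycle; returns 1 if a request is ready
    // (an SV `output int` argument maps to `int*` on the C side).
    if (pending_requests.empty()) return 0;
    *addr = pending_requests.front();
    pending_requests.pop();
    return 1;
}

extern "C" void model_put_response(int data) {
    // Called by the SV testbench when the RTL block produces a result.
    responses.push(data);
}
```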

    Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    Described here are the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common-mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.
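    As an illustration of what such an analytical reliability model can look like (not the actual AFTA models), the sketch below compares a simplex channel against a triple-modular-redundant configuration with a perfect majority voter, assuming a constant per-channel failure rate.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Assumed constant failure rate per channel (failures per hour).
    const double lambda = 1e-4;
    for (double t = 0; t <= 10000; t += 2500) {
        double r = std::exp(-lambda * t);            // simplex reliability
        double r_tmr = 3 * r * r - 2 * r * r * r;    // >= 2 of 3 channels up
        printf("t = %6.0f h   simplex = %.4f   TMR = %.4f\n", t, r, r_tmr);
    }
}
```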

    Dynamic Polymorphic Reconfiguration to Effectively “CLOAK” a Circuit’s Function

    Today's society has become more dependent on the integrity and protection of digital information used in daily transactions, resulting in an ever-increasing need for information security. Additionally, the need for faster and more secure cryptographic algorithms to provide this information security has become paramount. Hardware implementations of cryptographic algorithms provide the necessary increase in throughput, but at the cost of leaking critical information. Side Channel Analysis (SCA) attacks allow an attacker to exploit the regular and predictable power signatures leaked by cryptographic functions used in algorithms such as RSA. This research focuses on a means to counteract this vulnerability by creating a Critically Low Observable Anti-Tamper Keeping Circuit (CLOAK) capable of continuously changing the way it functions in both power and timing. This research has determined that a polymorphic circuit design capable of varying circuit power consumption and timing can protect a cryptographic device from Electromagnetic Analysis (EMA) attacks. In essence, we are effectively CLOAKing the circuit functions from an attacker.
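    The sketch below is a software analogy of the polymorphic idea (not the CLOAK circuit itself): functionally equivalent variants of an operation are selected at random and dummy work is inserted, so the power and timing profile observed by an attacker is decorrelated from the processed data.

```cpp
#include <cstdint>
#include <random>

// Software analogy of a polymorphic datapath (illustrative only).
static std::mt19937 rng{std::random_device{}()};

static uint32_t xor_variant_a(uint32_t x, uint32_t k) { return x ^ k; }
static uint32_t xor_variant_b(uint32_t x, uint32_t k) {
    // Same result computed through a different logic structure.
    return (x | k) & ~(x & k);
}

uint32_t cloaked_xor(uint32_t x, uint32_t k) {
    // Random dummy operations perturb timing and power without changing
    // the functional result.
    volatile uint32_t dummy = 0;
    std::uniform_int_distribution<int> extra(0, 7);
    for (int i = extra(rng); i > 0; --i) dummy = dummy + i * 0x9E3779B9u;

    // Randomly pick one of the functionally equivalent implementations.
    return (rng() & 1) ? xor_variant_a(x, k) : xor_variant_b(x, k);
}
```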

    Fault-based Analysis of Industrial Cyber-Physical Systems

    The fourth industrial revolution, called Industry 4.0, tries to bridge the gap between traditional Electronic Design Automation (EDA) technologies and the necessity of innovating in many industrial fields, e.g., automotive, avionics, and manufacturing. This complex digitalization process involves every industrial facility and comprises the transformation of methodologies, techniques, and tools to improve the efficiency of every industrial process. The enhancement of functional safety in Industry 4.0 applications needs to exploit the studies related to model-based and data-driven analyses of the deployed Industrial Cyber-Physical System (ICPS). Modeling an ICPS is possible at different abstraction levels, relying on the physical details included in the model and necessary to describe specific system behaviors. However, it is extremely complicated because an ICPS is composed of heterogeneous components related to different physical domains, e.g., digital, electrical, and mechanical. In addition, it is also necessary to consider not only nominal behaviors but also faulty behaviors to perform more specific analyses, e.g., predictive maintenance of specific assets. Nevertheless, these faulty data are usually not present or not directly available from the industrial machinery. To overcome these limitations, constructing a virtual model of an ICPS extended with different classes of faults enables the characterization of the faulty behaviors that the system exhibits under different faults. In the literature, these topics are addressed with non-uniform approaches and with the absence of standardized and automatic methodologies for describing and simulating faults in the different domains composing an ICPS. This thesis attempts to overcome these state-of-the-art gaps by proposing novel methodologies, techniques, and tools to: model and simulate analog and multi-domain systems; abstract low-level models to higher-level behavioral models; and monitor industrial systems based on the Industrial Internet of Things (IIoT) paradigm. Specifically, the proposed contributions involve the extension of state-of-the-art fault injection practices to improve ICPS safety, the development of frameworks for the automation of safety operations, and the definition of a monitoring framework for ICPSs. Overall, fault injection in analog and digital models is the state of the practice to ensure functional safety, as mentioned in the ISO 26262 standard specific to the automotive field. Starting from state-of-the-art defects defined for analog descriptions, new defects are proposed to enhance the IEEE P2427 draft standard for analog defect modeling and coverage. Moreover, different techniques to abstract a transistor-level model to a behavioral model are proposed to speed up the simulation of faulty circuits. Unlike the electrical domain, however, the mechanical domain makes no extensive use of fault injection techniques. Thus, extending fault injection to the mechanical and thermal fields allows for supporting the definition and evaluation of more reliable safety mechanisms. Hence, a taxonomy of mechanical faults is derived from the electrical domain by exploiting the physical analogies. Furthermore, specific tools are built for automatically instrumenting different descriptions with multi-domain faults. The entire work is proposed as a basis for supporting the creation of increasingly resilient and secure ICPSs that need to preserve functional safety in any operating context.
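    As a minimal illustration of multi-domain fault injection, the sketch below wraps a behavioral signal with a "saboteur" that can force nominal behavior, a stuck-at value, or a parameter drift; the fault classes and the sensor model are generic examples, not the specific defect models proposed in the thesis or in the IEEE P2427 draft.

```cpp
#include <cstdio>

// Generic saboteur for a behavioral model (illustrative fault classes).
enum class Fault { None, StuckAt, Drift };

struct Saboteur {
    Fault fault = Fault::None;
    double stuck_value = 0.0;   // used when fault == StuckAt
    double drift_gain = 1.0;    // used when fault == Drift (1.2 => +20%)

    double inject(double nominal) const {
        switch (fault) {
            case Fault::StuckAt: return stuck_value;
            case Fault::Drift:   return nominal * drift_gain;
            default:             return nominal;
        }
    }
};

int main() {
    // Behavioral model of a simple electromechanical path:
    // torque = k * current (k assumed for the example).
    auto torque = [](double current) { return 0.35 * current; };

    Saboteur s;
    s.fault = Fault::Drift;
    s.drift_gain = 1.2;         // inject a 20% gain drift on the output

    for (double i = 0.0; i <= 2.0; i += 0.5)
        printf("current=%.1f A  nominal=%.3f Nm  faulty=%.3f Nm\n",
               i, torque(i), s.inject(torque(i)));
}
```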