
    Exploiting robustness in asynchronous circuits to design fine-tunable systems

    PhD thesis. The robustness property of a circuit defines its tolerance to the effects of process, voltage and temperature variations. The mode of signalling and event communication between computing units in asynchronous circuits makes them inherently robust. The level of robustness depends on the type of delay assumptions used in the design and specification process. In this thesis, two approaches to exploiting the robustness of asynchronous circuits to design self-adapting and fine-tunable systems are investigated. In the first investigation, a Digitally Controllable Oscillator (DCO) and a computing unit are integrated such that the operating conditions of the computing unit modulate the operation of the DCO. The computing unit, a self-timed counter, interacts with the DCO through a four-phase handshake protocol. This mode of interaction yields a DCO and computing-unit system that can fine-tune its operation to adapt to the effects of variations, and it is shown that such a system operates correctly over a wide range of supply voltages. In the second investigation, a Digital Pulse-Width Modulator (DPWM) with coarse and fine-tune controls is designed using two Kessels counters. The coarse control of the DPWM tunes the pulse ratio and pulse frequency, while the fine-tune control exploits the robustness of asynchronous circuits in an addition-based delay system to add or subtract delays to the pulse width while maintaining a constant pulse frequency. The realised DPWM gives a constant duty ratio regardless of the operating voltage. This type of DPWM has practical application in a DC-DC converter circuit, where it can tune the output voltage of the converter at high resolution. The Kessels counter is a loadable self-timed modulo-n counter, realised by decomposition using Horner’s method and specified and verified using formal asynchronous design techniques. The decomposition method introduces parallelism by dividing the counter into a systolic array of cells, with each cell further decomposed into two parts that have distinct, defined operations. Specification of the decomposed counter cell parts proceeds in three stages. The first stage employs high-level specification using Labelled Petri nets (LPNs), in which the functional correctness of the decomposed counter is modelled and verified. In the second stage, each cell part is specified by combining all possible operations for that cell part in high-level form, so that a combination of inputs from a defined control block activates the correct operation for the cell part. In the final stage, the LPNs are converted to Signal Transition Graphs, from which the logic circuits of the cells are synthesised using the Workcraft tool. The Kessels counter was implemented and fabricated in 350 nm CMOS technology. This work was funded by the Niger Delta Development Commission (NDDC).
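
    The four-phase handshake between the DCO and the self-timed counter is the mechanism that lets the pair self-adapt. As an illustration only (not taken from the thesis design), the following Python sketch simulates one initiator and one responder completing return-to-zero handshakes; the signal names and timing are assumptions made for the sketch.

```python
# Illustrative sketch of a four-phase (return-to-zero) handshake.
# req/ack names and the sleep-based polling are assumptions, not the
# fabricated asynchronous design described in the thesis.
import threading
import time

req = threading.Event()   # request wire driven by the initiator (DCO side)
ack = threading.Event()   # acknowledge wire driven by the responder (counter side)

def dco(cycles: int) -> None:
    """Initiator: one four-phase cycle per tick it wants to issue."""
    for i in range(cycles):
        req.set()                   # phase 1: raise request
        ack.wait()                  # phase 2: wait for acknowledge high
        req.clear()                 # phase 3: drop request
        while ack.is_set():         # phase 4: wait for acknowledge low
            time.sleep(0.001)
        print(f"DCO completed handshake {i}")

def counter(cycles: int) -> None:
    """Responder: advances its count once per completed handshake."""
    count = 0
    for _ in range(cycles):
        req.wait()                  # see request high
        count += 1                  # do the 'work' (advance the count)
        ack.set()                   # acknowledge
        while req.is_set():         # wait for request to drop
            time.sleep(0.001)
        ack.clear()                 # return to zero, ready for the next cycle
    print(f"counter value: {count}")

t1 = threading.Thread(target=dco, args=(3,))
t2 = threading.Thread(target=counter, args=(3,))
t1.start(); t2.start(); t1.join(); t2.join()
```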

    Design of asynchronous microprocessor for power proportionality

    PhD thesis. Microprocessors continue to get exponentially cheaper for end users following Moore’s law, while the costs involved in their design keep growing, also at an exponential rate. The reason is the ever-increasing complexity of processors, which modern EDA tools struggle to keep up with. This makes further scaling for performance subject to a high risk to the reliability of the system. To keep this risk low, yet improve performance, CPU designers try to optimise various parts of the processor. The Instruction Set Architecture (ISA) is a significant part of the whole processor design flow, and its optimal design for a particular combination of available hardware resources and software requirements is crucial for building processors with high performance and efficient energy utilisation. This is a challenging task involving many heuristics and high-level design decisions. Another issue impacting CPU reliability is continuous scaling for power consumption. For the last few decades CPU designers have mainly focused on improving performance, while “keeping energy and power consumption in mind”. The consequence was the development of energy-efficient systems, where energy was considered a resource whose consumption should be optimised. As CMOS technology progressed, with feature sizes decreasing and the power delivered to circuit components becoming less stable, the energy resource turned from an optimisation criterion into a constraint, sometimes a critical one. At this point power proportionality becomes one of the most important aspects of system design. Developing methods and techniques that address the problem of designing a power-proportional microprocessor, capable of adapting at runtime to varying operating conditions (such as low or even unstable voltage levels) and application requirements, is one of today’s grand challenges. In this thesis this challenge is addressed by proposing a new design flow for the development of an ISA for microprocessors, which can be altered to suit a particular hardware platform or a specific operating mode. This flow uses an expressive and powerful formalism for the specification of processor instruction sets called the Conditional Partial Order Graph (CPOG). The CPOG model captures large sets of behavioural scenarios at the microarchitectural level in a computationally efficient form amenable to formal transformations for synthesis, verification and automated derivation of asynchronous hardware for the CPU microcontrol. The feasibility of the methodology, the novel design flow and a number of optimisation techniques was proven in a full-size asynchronous Intel 8051 microprocessor and its demonstrator silicon. The chip showed the ability to work in a wide range of operating voltages and environmental conditions. Depending on application requirements and power budget, our ASIC supports several operating modes: one optimised for energy consumption and the other for performance. This was achieved by extending a traditional datapath structure with an auxiliary control layer for adaptable and fault-tolerant operation. These and other optimisations resulted in a reconfigurable and adaptable implementation, which was proven by measurements, analysis and evaluation of the chip. This work was funded by EPSRC.
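
    To make the CPOG idea concrete, here is a toy Python sketch (an illustration, not the thesis design flow): vertices stand for micro-operations, each vertex and arc carries a Boolean condition over instruction flags, and projecting the graph for one flag assignment yields that instruction's partial order of operations. The flags and micro-operation names below are invented for the example.

```python
# Toy Conditional Partial Order Graph: conditions are predicates over
# instruction flags; project() extracts one instruction's scenario.
# All instruction flags and micro-operation names are hypothetical.
from typing import Callable, Dict, List, Tuple

Cond = Callable[[Dict[str, bool]], bool]

class CPOG:
    def __init__(self) -> None:
        self.vertices: Dict[str, Cond] = {}
        self.arcs: List[Tuple[str, str, Cond]] = []

    def vertex(self, name: str, cond: Cond) -> None:
        self.vertices[name] = cond

    def arc(self, src: str, dst: str, cond: Cond) -> None:
        self.arcs.append((src, dst, cond))

    def project(self, flags: Dict[str, bool]):
        """Return (active vertices, active arcs) for one flag assignment."""
        vs = {v for v, c in self.vertices.items() if c(flags)}
        es = [(a, b) for a, b, c in self.arcs
              if c(flags) and a in vs and b in vs]
        return vs, es

g = CPOG()
always: Cond = lambda f: True
g.vertex("fetch", always)
g.vertex("read_regs", always)
g.vertex("alu", lambda f: f["is_alu"])          # only ALU-type instructions
g.vertex("mem_access", lambda f: f["is_load"])  # only load instructions
g.vertex("writeback", always)
g.arc("fetch", "read_regs", always)
g.arc("read_regs", "alu", lambda f: f["is_alu"])
g.arc("read_regs", "mem_access", lambda f: f["is_load"])
g.arc("alu", "writeback", lambda f: f["is_alu"])
g.arc("mem_access", "writeback", lambda f: f["is_load"])

print(g.project({"is_alu": True, "is_load": False}))   # ALU scenario
print(g.project({"is_alu": False, "is_load": True}))   # load scenario
```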

    A distributed control microprocessor system


    Design of digital systems


    Guided direct time-of-flight Lidar for self-driving vehicles

    Self-driving vehicles demand efficient and reliable depth-sensing technologies. Lidar, with its capacity for long-distance, high-precision measurement, is a crucial component in this pursuit. However, conventional mechanical scanning implementations suffer from reliability, cost and frame-rate limitations. Solid-state lidar solutions have emerged as a promising alternative, but the vast amount of photon data processed and stored using conventional direct time-of-flight (dToF) prevents long-distance sensing unless power-intensive partial-histogram approaches are used. This research introduces a pioneering ‘guided’ dToF approach, harnessing external guidance from other onboard sensors to narrow down the depth search space for a power- and data-efficient solution. This approach centres on a dToF sensor in which the exposure time window of independent pixels can be dynamically adjusted. A pair of vision cameras is used in this demonstrator to provide the guiding depth estimates. The implemented guided dToF demonstrator successfully captures a dynamic outdoor scene at 3 fps with distances up to 75 m. Compared to a conventional full-histogram approach, on-chip data is reduced by over 25 times, while the total laser cycles in each frame are reduced by at least 6 times compared to any partial-histogram approach. The capability of guided dToF to mitigate multipath reflections is also demonstrated. For self-driving vehicles, where a wealth of sensor data is already available, guided dToF opens new possibilities for efficient solid-state lidar.
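
    The core of the guided dToF idea, narrowing each pixel's histogram to a small window around an externally supplied depth estimate, can be sketched in a few lines of numpy. This is an illustration under assumed parameters (bin width, window size, noise model), not the sensor's actual on-chip processing.

```python
# Toy guided-dToF sketch: a guide depth from another sensor selects a
# narrow time window, so only a small histogram is stored per pixel.
# Bin width, window size and the noise model are assumptions.
import numpy as np

C = 3e8                      # speed of light, m/s
BIN_S = 0.5e-9               # histogram bin width (0.5 ns ~ 7.5 cm per bin)
MAX_RANGE_M = 150.0          # range covered by a conventional full histogram

def tof_bin(depth_m: float) -> int:
    return int(round(2 * depth_m / C / BIN_S))     # round-trip time in bins

full_bins = tof_bin(MAX_RANGE_M)                   # size of a full histogram
guide_depth = 74.0                                 # camera-derived estimate (m)
true_depth = 75.0                                  # actual target distance (m)
window_bins = 64                                   # narrow guided window

# Simulated photon arrivals: returns clustered near the true depth plus
# background counts spread uniformly over the full range.
rng = np.random.default_rng(0)
signal = rng.normal(tof_bin(true_depth), 2, size=200).astype(int)
background = rng.integers(0, full_bins, size=2000)
arrivals = np.concatenate([signal, background])

# Guided mode: histogram only arrivals inside the window around the guide.
lo = tof_bin(guide_depth) - window_bins // 2
in_window = arrivals[(arrivals >= lo) & (arrivals < lo + window_bins)] - lo
hist = np.bincount(in_window, minlength=window_bins)
est_depth = (lo + int(hist.argmax())) * BIN_S * C / 2

print(f"full histogram bins: {full_bins}, guided bins: {window_bins}")
print(f"estimated depth: {est_depth:.2f} m (true {true_depth} m)")
```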

    A Multi-Hop 6LoWPAN Wireless Sensor Network for Waste Management Optimization

    In the first part of this thesis, several Wireless Sensor Network (WSN) technologies, including those based on the IEEE 802.15.4 protocol standard such as ZigBee, 6LoWPAN and Ultra Wide Band, as well as technologies based on other protocol standards such as Z-Wave, Bluetooth and Dash7, are analyzed with respect to their relevance and suitability for the Waste Management Outsmart European FP7 Project. Particular attention is given to the parameters which characterize a large-scale WSN for smart cities, due to the number of sensors involved and the practical application requested by the project. Secondly, a prototype sensor network is proposed: the Contiki operating system is chosen for its portability across different hardware platforms, its open-source license, its use of the 6LoWPAN protocol and its implementation of the new RPL routing protocol. The operating system is described in detail, with a special focus on the uIPv6 TCP/IP stack and the RPL implementation. With regard to this innovative routing protocol, designed specifically for Low-Power and Lossy Networks, chapter 4 describes in detail how the network topology is organized as a Directed Acyclic Graph, what an RPL Instance is, and how downward and upward routes are constructed and maintained. Using several Atmel AVR modules running Contiki, a real WSN is created and, with an ultrasonic sensor, the filling level of a waste-basket prototype is periodically measured and transmitted through a multi-hop wireless network to a sink node.
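
    The measurement the prototype node performs is simple to sketch. The following Python fragment is an illustration only (the real nodes run Contiki firmware written in C): it converts an ultrasonic distance reading into a fill-level percentage, assuming a hypothetical bin depth and reporting interval.

```python
# Illustrative fill-level calculation for a waste-bin node. The sensor is
# assumed to be mounted in the lid and to measure the distance down to the
# waste surface; BIN_DEPTH_CM and REPORT_PERIOD_S are hypothetical values.

BIN_DEPTH_CM = 100.0      # distance from sensor (lid) to the bin floor
REPORT_PERIOD_S = 600     # hypothetical reporting interval over 6LoWPAN/RPL

def fill_level_percent(distance_to_waste_cm: float) -> float:
    """Convert an ultrasonic distance reading into a 0-100% fill level."""
    distance = min(max(distance_to_waste_cm, 0.0), BIN_DEPTH_CM)
    return 100.0 * (BIN_DEPTH_CM - distance) / BIN_DEPTH_CM

# Example readings: an empty bin echoes from near the floor, a full one
# echoes from just below the lid.
for reading_cm in (98.0, 55.0, 12.0):
    print(f"distance {reading_cm:5.1f} cm -> fill {fill_level_percent(reading_cm):5.1f} %")
```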

    Proof Planning for Automating Hardware Verification

    Centre for Intelligent Systems and their Applications. In this thesis we investigate the applicability of proof planning to automate the verification of hardware systems. Proof planning is a meta-level reasoning technique which captures patterns of proof common to a family of theorems. It contributes to the automation of proof by incorporating and extending heuristics found in the Nqthm theorem prover and using them to guide a tactic-based theorem prover in the search for a proof. We have addressed the automation of proof for hardware verification from a proof planning perspective, and have applied the strategies and search control mechanisms of proof planning to automatically generate customised tactics which prove conjectures about the correctness of many types of circuits. The contributions of this research can be summarised as follows: (1) we show by experimentation the applicability of proof planning ideas to automatically verify hardware designs; (2) we develop and use a methodology based on the concept of proof engineering, using proof planning to verify various combinational and sequential circuits, including arithmetic circuits (adders, subtracters, multipliers, dividers, factorials), data-path components (arithmetic logic units, shifters, processing units) and a simple microprocessor system; and (3) we contribute to the profiling of the Clam proof planning system by improving its robustness and efficiency in handling large terms and proofs. In verifying hardware, the user formalises a problem by writing the specification, the implementation and the conjecture in a logic language, and asks Clam to compose a tactic to prove the conjecture. This tactic is then executed by the Oyster prover. To compose a tactic, Clam uses a set of methods which implement the heuristics that specify general-purpose tactics, together with AI planning mechanisms. Search is controlled by a type of annotated rewriting called rippling, which controls the selective application of rewrite rules called wave rules. We have extended some of Clam's methods to verify circuits. The proofs were orders of magnitude larger than the proofs that had been attempted before with proof planning, and are comparable with similar verification proofs obtained by other systems, but using fewer lemmas and less interaction. Proof engineering refers to the application of formal proof to system design and verification. We propose a proof engineering methodology which consists of partitioning the automation of formal proof into three different kinds of tasks: user, proof and systems tasks. User tasks have to do with formalising a particular verification problem and using a formal tool to obtain a proof. Proof tasks refer to the tuning of proof techniques (e.g. methods and tactics) to help obtain a proof. Systems tasks have to do with the modification of a formal tool system. By making this distinction explicit, proof development is more manageable. We conjecture that our approach is widely applicable and can be integrated into formal verification environments to improve automation facilities, and be utilised to verify commercial and safety-critical hardware systems in industrial settings.
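
    The meta-level flavour of proof planning, methods whose preconditions are checked and chained into a tactic before the object-level prover runs, can be sketched as follows. This is a toy illustration with an invented goal representation and three stand-in methods; Clam's real methods and rippling machinery are far richer.

```python
# Toy proof planner: 'methods' carry meta-level preconditions and name a
# tactic; the planner chains applicable methods into a tactic script.
# The Goal fields and the three methods are invented for this sketch.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Goal:
    has_induction_var: bool = True      # goal quantifies over a recursive type
    rippled: bool = False               # wave fronts moved out of the way yet?
    fertilised: bool = False            # induction hypothesis applied yet?

@dataclass
class Method:
    name: str                           # tactic this method stands for
    applicable: Callable[[Goal], bool]  # meta-level precondition
    apply: Callable[[Goal], Goal]       # meta-level effect on the goal

methods: List[Method] = [
    Method("induction",
           lambda g: g.has_induction_var,
           lambda g: Goal(False, False, False)),
    Method("ripple",
           lambda g: not g.has_induction_var and not g.rippled,
           lambda g: Goal(False, True, False)),
    Method("fertilise",
           lambda g: g.rippled and not g.fertilised,
           lambda g: Goal(False, True, True)),
]

def plan(goal: Goal) -> List[str]:
    """Greedily chain applicable methods into a tactic script."""
    script: List[str] = []
    progressing = True
    while progressing:
        progressing = False
        for m in methods:
            if m.applicable(goal):
                script.append(m.name)
                goal = m.apply(goal)
                progressing = True
                break
    return script

print(plan(Goal()))   # -> ['induction', 'ripple', 'fertilise']
```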

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    A flight software development and simulation framework for advanced space systems

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2002. Includes bibliographical references (p. 293-302). Distributed terrestrial computer systems employ middleware software to provide communications abstractions and reduce software interface complexity. Embedded applications are adopting the same approaches, but must make provisions to ensure that hard real-time temporal performance can be maintained. This thesis presents the development and validation of a middleware system tailored to spacecraft flight software development. Our middleware runs on the Generalized Flight Operations Processing Simulator (GFLOPS) and is called the GFLOPS Rapid Real-time Development Environment (GRRDE). GRRDE provides publish-subscribe communication services between software components. These services help to reduce the complexity of managing software interfaces. The hard real-time performance of these services has been verified with General Timed Automata modelling and extensive run-time testing. Several example applications illustrate the use of GRRDE to support advanced flight software development. Two technology-focused studies examine automatic code generation and autonomous fault protection within the GRRDE framework. A complex simulation of the TechSat 21 distributed space-based radar mission highlights the utility of the approach for large-scale applications. By John Patrick Enright. Ph.D.
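
    GRRDE's central abstraction is publish-subscribe messaging between flight software components. The sketch below is a minimal in-process Python broker illustrating that abstraction only; the topic names and single-threaded delivery are assumptions, and the real middleware adds the hard real-time guarantees the thesis verifies.

```python
# Minimal publish-subscribe broker illustrating the communication
# abstraction described above. Topic names and payloads are hypothetical.
from collections import defaultdict
from typing import Any, Callable, DefaultDict, List

class Broker:
    def __init__(self) -> None:
        self._subs: DefaultDict[str, List[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        """Register a component's callback for a topic."""
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        """Deliver a message to every subscriber of the topic."""
        for handler in self._subs[topic]:
            handler(message)

broker = Broker()
# Consumers subscribe to a topic without knowing which component produces it.
broker.subscribe("attitude", lambda q: print("guidance received attitude", q))
broker.subscribe("attitude", lambda q: print("telemetry logged attitude", q))
# The producer publishes; subscribers remain decoupled from it.
broker.publish("attitude", {"q": (1.0, 0.0, 0.0, 0.0)})
```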
