
    Design of a receiver for measurement of real-time ionospheric reflection height

    Thesis (M.S.) University of Alaska Fairbanks, 2005. The HF (high frequency) radar at Kodiak Island, Alaska, is part of the SuperDARN (Super Dual Auroral Radar Network) network of radars designed to detect echoes from ionospheric field-aligned density irregularities. Normal azimuth scans of the radar begin on whole-minute boundaries, leaving 12 s of downtime between scans. The radar makes use of this downtime by stepping through eight different frequencies for each beam direction using 1 or 2 s integration periods. A new receiver system has been developed at Poker Flat Research Range (PFRR) to utilize the ground-scatter returns from the radar's sounding mode of operation and calculate the ionospheric virtual reflection height. This would result in considerable improvement in the accuracy of the critical frequency and Angle of Arrival (AOA) estimations made by the Kodiak SuperDARN. Contents: Introduction -- Background -- Structure of the ionosphere -- Photoionization -- Recombination -- Layers -- Ionospheric refraction -- Ionospheric propagation -- Reflection at vertical incidence -- Virtual height concept -- Oblique incidence -- Motivation -- Problem statement and proposed solution -- Equipment overview -- Basic radar definitions -- Overview of the HF radar at Kodiak -- Frequency operation -- Sounding mode -- Antennas -- Power -- Receiver antenna -- Reflector analysis -- GPS clock card -- Clock card specifications -- Overview of PCI card control/status registers -- The synchronized generator: GPS mode outline -- Software time capture -- Event time capture -- Receiver card specifications -- The system design and implementation -- Specifications -- The pulse sequence -- The QNX operating system -- Configuring the clock card -- Configuring the GC214 -- Sampling -- Mixing -- Decimation -- Filtering -- Resampling -- GC214 latency -- Gain -- Data header format -- Direct memory access (DMA) -- DMA buffer creation -- RAM-disk -- External trigger synchronization -- Signal processing code -- Link budget -- Results and future work -- Final code -- Results -- Errors -- Applications -- Future work -- Bibliography
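    As a rough, hedged illustration of the virtual-height concept this abstract refers to (not the thesis's receiver or signal-processing code), the Python sketch below converts an echo delay into a virtual reflection height under a simple flat-Earth mirror model; the function names and example numbers are hypothetical.

```python
# Minimal sketch of the virtual-height relations, assuming a flat-Earth,
# mirror-reflection model. Illustrative only, not the thesis's code.
import math

C = 299_792_458.0  # speed of light in vacuum, m/s


def virtual_height_vertical(round_trip_delay_s: float) -> float:
    """Vertical incidence: h' = c * t / 2 for a round-trip echo delay t."""
    return C * round_trip_delay_s / 2.0


def virtual_height_oblique(slant_range_m: float, elevation_deg: float) -> float:
    """Oblique incidence, flat-Earth mirror model: h' = d * sin(elevation),
    where d is the one-way slant range to the apparent reflection point."""
    return slant_range_m * math.sin(math.radians(elevation_deg))


if __name__ == "__main__":
    # e.g. a 2 ms round-trip echo corresponds to ~300 km virtual height
    print(virtual_height_vertical(2.0e-3) / 1e3, "km")
    print(virtual_height_oblique(600e3, 30.0) / 1e3, "km")
```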

    Reliable chip design from low powered unreliable components

    The pace of technological improvement in the semiconductor market is driven by Moore's Law, which enables chip transistor density to double every two years. Transistors continue to decline in cost and size but increase in power. The continuous transistor scaling and extremely low power constraints in modern Very Large Scale Integrated (VLSI) chips can potentially supersede the benefits of technology shrinking due to reliability issues. As VLSI technology scales into the nanoscale regime, fundamental physical limits are approached, and higher levels of variability, performance degradation, and higher rates of manufacturing defects are experienced. Soft errors, which traditionally affected only memories, are now also degrading logic circuit reliability. A solution to these limitations is to integrate reliability assessment techniques into the Integrated Circuit (IC) design flow. This thesis investigates four aspects of reliability-driven circuit design: a) reliability estimation; b) reliability optimization; c) fault-tolerant techniques; and d) delay degradation analysis. To guide the reliability-driven synthesis and optimization of combinational circuits, a highly accurate, probability-based reliability estimation methodology, christened the Conditional Probabilistic Error Propagation (CPEP) algorithm, is developed to compute the impact of gate failures on the circuit output. CPEP guides the proposed rewriting-based logic optimization algorithm employing local transformations. The main idea behind this methodology is to replace parts of the circuit with functionally equivalent but more reliable counterparts chosen from a precomputed subset of Negation-Permutation-Negation (NPN) classes of 4-variable functions. Cut enumeration and Boolean matching, driven by the reliability-aware optimization algorithm, are used to identify the best possible replacement candidates. Experiments on a set of MCNC benchmark circuits and 8051 functional microcontroller units indicate that the proposed framework can achieve up to 75% reduction of output error probability. On average, about 14% Soft Error Rate (SER) reduction is obtained at the expense of a very low area overhead of 6.57%, which results in 13.52% higher power consumption. The next contribution of the research describes a novel methodology to design fault-tolerant circuitry by employing error correction codes, known as the Codeword Prediction Encoder (CPE). Traditional fault-tolerant techniques analyze the circuit reliability issue from a static point of view, neglecting dynamic errors. In the context of communication and storage, the study of novel methods for reliable data transmission under unreliable hardware is an increasing priority. The idea of the CPE is adapted from the field of forward error correction for telecommunications, focusing on both encoding aspects and error correction capabilities. The proposed Augmented Encoding solution consists of computing an augmented codeword that contains both the codeword to be transmitted on the channel and extra parity bits. A Computer Aided Design (CAD) framework known as the CPE simulator is developed, providing a unified platform that comprises a novel encoder and fault-tolerant LDPC decoders. Experiments on a set of encoders with different coding rates and different decoders indicate that the proposed framework can correct all errors under specific scenarios. On average, about 1000 times improvement in SER reduction is achieved.
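    The CPEP algorithm itself is not spelled out in this abstract, so the following is only a minimal, hypothetical Python sketch of the general idea of probabilistic error propagation through a single gate under a von Neumann failure model (each gate flips its output with probability eps, and inputs are flipped independently); the function names, the AND-gate example, and the probability values are illustrative assumptions, not the thesis's method.

```python
# Hypothetical sketch: probability that a gate's output is wrong, given
# independent input error probabilities and a von Neumann gate failure rate.
from itertools import product


def and_gate(a: int, b: int) -> int:
    return a & b


def output_error_prob(gate, p_in_err, eps, nominal_inputs):
    """Enumerate input-flip patterns and accumulate the probability that the
    observed output differs from the fault-free (golden) output."""
    golden = gate(*nominal_inputs)
    p_wrong = 0.0
    for flips in product((0, 1), repeat=len(nominal_inputs)):
        p_pattern = 1.0
        actual = []
        for bit, flip, p in zip(nominal_inputs, flips, p_in_err):
            p_pattern *= p if flip else (1.0 - p)
            actual.append(bit ^ flip)
        out = gate(*actual)
        # Output is wrong if the gate flips a correct value, or fails to
        # flip an already-wrong value.
        p_wrong += p_pattern * (eps if out == golden else (1.0 - eps))
    return p_wrong


# Example: AND gate, nominal inputs (1, 1), 1% input error rates, 0.1% gate failure
print(output_error_prob(and_gate, [0.01, 0.01], 0.001, (1, 1)))
```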
The last part of the research is the Inverse Gaussian Distribution (IGD) based delay model, applicable to both combinational and sequential elements of sub-powered circuits. The Probability Density Function (PDF) based delay model accurately captures the delay behavior of all the basic gates in the library database. The IGD model employs these necessary parameters, and the delay estimation accuracy is demonstrated by evaluating multiple circuits. Experimental results indicate that the IGD-based approach matches HSPICE Monte Carlo simulation results closely, with an average error of less than 1.9% and 1.2% for the 8-bit Ripple Carry Adder (RCA) and the 8-bit De-Multiplexer (DEMUX) and Multiplexer (MUX), respectively.
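As a hedged sketch of an Inverse Gaussian (Wald) delay model of the kind described above (the thesis's exact parameter-extraction flow is not reproduced here), the snippet below evaluates the IGD density and fits its two parameters to a set of gate-delay samples by maximum likelihood; the sample values are invented for illustration.

```python
# Minimal Inverse Gaussian (Wald) delay-model sketch; not the thesis's code.
import math


def inverse_gaussian_pdf(x: float, mu: float, lam: float) -> float:
    """IGD density f(x; mu, lambda) for x > 0."""
    return math.sqrt(lam / (2.0 * math.pi * x ** 3)) * \
        math.exp(-lam * (x - mu) ** 2 / (2.0 * mu ** 2 * x))


def fit_inverse_gaussian(samples):
    """Maximum-likelihood fit of (mu, lambda) to positive delay samples:
    mu = sample mean, 1/lambda = mean of (1/x_i - 1/mu)."""
    n = len(samples)
    mu = sum(samples) / n
    lam = n / sum(1.0 / x - 1.0 / mu for x in samples)
    return mu, lam


# Hypothetical gate-delay samples in picoseconds (e.g. from Monte Carlo SPICE runs)
delays_ps = [41.8, 39.5, 44.2, 40.7, 43.1, 38.9, 45.6, 42.3]
mu, lam = fit_inverse_gaussian(delays_ps)
print(f"fitted IGD: mu={mu:.2f} ps, lambda={lam:.1f} ps")
print("density at 42 ps:", inverse_gaussian_pdf(42.0, mu, lam))
```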

    Design for Reliability and Low Power in Emerging Technologies

    The continuous downscaling of transistor feature sizes is one of the most important drivers of growth in the semiconductor industry. For decades, both the integration density and the complexity of circuits have increased, a continuing trend that spans all modern process nodes. Until recently, transistor scaling went hand in hand with a reduction of the supply voltage, which lowered power consumption and kept the power density constant. With the advent of nanometer feature sizes, however, this scaling has slowed down. Numerous difficulties, such as reaching physical manufacturing limits and non-idealities in supply-voltage scaling, have led to an increase in power density and, with it, to aggravated problems in ensuring reliability. These include, among others, transistor aging effects and excessive heating, not least due to the stronger occurrence of self-heating effects within the transistors. To prevent such problems from endangering the reliability of a circuit, the internal signal delays are usually calculated very pessimistically. The resulting timing guardband ensures correct circuit functionality, but at the cost of performance. Alternatively, circuit reliability can be increased by other techniques, such as operation at zero-temperature-coefficient points or approximate computing. Although these techniques can recover a large part of the usual timing guardband, they carry further consequences and trade-offs. Persistent challenges in scaling CMOS technologies also lead to an increased focus on promising emerging technologies. One example is the Negative Capacitance Field-Effect Transistor (NCFET), which exhibits a remarkable performance gain over conventional FinFET transistors and could replace them in the future. Furthermore, circuit designers increasingly rely on complex, parallel structures rather than higher clock frequencies. These complex designs require modern power-management techniques in all aspects of the design. With the emergence of novel transistor technologies (such as NCFET), these power-management techniques must be re-evaluated, since the underlying dependencies and trade-offs change. This work presents new approaches to both the analysis and the modeling of circuit reliability in order to address the aforementioned challenges at several design levels. These approaches are divided into conventional techniques ((a), (b), (c), and (d)) and unconventional techniques ((e) and (f)), as follows: (a) Analysis of performance gains associated with maximizing power efficiency when operating near the transistor threshold voltage, in particular at the optimal power point. Precisely determining such an optimal power point is a particular challenge in multicore designs, since it shifts with the respective optimization objectives and the workload.
(b) Revealing hidden interdependencies between transistor aging effects and supply-voltage fluctuations caused by IR drops. A novel technique is presented that avoids both over- and underestimation when determining the timing guardband and consequently yields the smallest, yet still sufficient, guardband. (c) Mitigation of transistor aging effects through "Graceful Approximation", a technique for increasing the clock frequency on demand. The timing guardband required by aging effects is replaced by approximate computing techniques, and quantization is used to guarantee sufficient accuracy in the computations. (d) Mitigation of temperature-dependent delay degradation by operating near the zero-temperature coefficient (N-ZTC). Operation at N-ZTC minimizes temperature-induced variations in performance and power consumption. Qualitative and quantitative comparisons against traditional timing guardbands are presented. (e) Modeling of power-management techniques for NCFET-based processors. NCFET technology has unique properties for which conventional runtime voltage and frequency scaling schemes (DVS/DVFS) yield suboptimal results, calling for NCFET-specific power-management techniques, which are presented in this work. (f) Presentation of a novel heterogeneous multicore design in NCFET technology. The design contains identical cores; heterogeneity arises from applying each core's individual optimal configuration. Amdahl's law is extended to cover new system- and application-specific parameters and to demonstrate the benefits of the new design. The presented techniques are evaluated using gate-level implementations and simulations. Furthermore, system-level simulators are used to implement and simulate multicore designs. Analytical, gate-level, and system-level simulations, covering both synthetic and real applications, are used for validation and for assessing the effectiveness against the state of the art.
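The thesis's extended form of Amdahl's law, with its system- and application-specific parameters, is not given in this abstract. As a hedged illustration of the underlying reasoning, the sketch below contrasts the classical law with a Hill-and-Marty-style heterogeneous variant in which each core contributes its own relative performance; all core performance numbers are invented.

```python
# Illustrative speedup models only; not the thesis's extended formulation.

def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Classical Amdahl's law: speedup over a single baseline core."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)


def heterogeneous_speedup(parallel_fraction: float, core_perfs) -> float:
    """Hill/Marty-style variant: the serial part runs on the fastest core
    configuration, the parallel part on all cores together."""
    serial = 1.0 - parallel_fraction
    perf_serial = max(core_perfs)
    perf_parallel = sum(core_perfs)
    return 1.0 / (serial / perf_serial + parallel_fraction / perf_parallel)


# Four identical cores, each tuned to a different (hypothetical) operating point
print(amdahl_speedup(0.9, 4))                          # ~3.08
print(heterogeneous_speedup(0.9, [1.3, 1.0, 0.8, 0.6]))  # ~3.12
```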

    Applications of MATLAB in Science and Engineering

    The book consists of 24 chapters illustrating a wide range of areas where MATLAB tools are applied. These areas include mathematics, physics, chemistry and chemical engineering, mechanical engineering, biological (molecular biology) and medical sciences, communication and control systems, digital signal, image and video processing, and system modeling and simulation. Many interesting problems are included throughout the book, and its contents will be beneficial to students and professionals across a wide range of fields.

    A Network-based Asynchronous Architecture for Cryptographic Devices

    Institute for Computing Systems Architecture. The traditional model of cryptography examines the security of the cipher as a mathematical function. However, ciphers that are secure when specified as mathematical functions are not necessarily secure in real-world implementations. The physical implementations of ciphers can be extremely difficult to control and often leak so-called side-channel information. Side-channel cryptanalysis attacks have been shown to be especially effective as a practical means of attacking implementations of cryptographic algorithms on simple hardware platforms, such as smart-cards. Adversaries can obtain sensitive information from side-channels such as the timing of operations, power consumption, and electromagnetic emissions. Some of these attack techniques require surprisingly little side-channel information to break some of the best known ciphers. In constrained devices, such as smart-cards, straightforward implementations of cryptographic algorithms can be broken with minimal work. Preventing these attacks has become an active and challenging area of research. Power analysis is a successful cryptanalytic technique that extracts secret information from cryptographic devices by analysing the power consumed during their operation. A particularly dangerous class of power analysis, differential power analysis (DPA), relies on the correlation of power consumption measurements. It has been proposed that adding non-determinism to the execution of the cryptographic device would reduce the danger of these attacks. It has also been demonstrated that asynchronous logic has advantages for security-sensitive applications. This thesis investigates the security and performance advantages of using a network-based asynchronous architecture, in which the functional units of the datapath form a network. Non-deterministic execution is achieved by exploiting concurrent execution of instructions both with and without data-dependencies, and by forwarding register values between instructions with data-dependencies using randomised routing over the network. The executions of cryptographic algorithms on different architectural configurations are simulated, and the obtained power traces are subjected to DPA attacks. The results show that the proposed architecture introduces a level of non-determinism in the execution that significantly raises the threshold for DPA attacks to succeed. In addition, the performance analysis shows that the improved security does not degrade performance.
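    For readers unfamiliar with the attack this architecture defends against, the following is a minimal, hypothetical sketch of the classic single-bit DPA difference-of-means statistic; it is not the simulation framework used in the thesis, and the synthetic traces and leakage model are invented for illustration.

```python
# Sketch of the single-bit DPA difference-of-means statistic on synthetic traces.
import numpy as np


def dpa_difference_of_means(traces, selection_bits):
    """Split power traces by a predicted intermediate bit and return the
    difference of the two mean traces; a pronounced peak suggests the key
    guess behind the selection function is correct.
    traces: (n_traces, n_samples) array; selection_bits: n_traces 0/1 values."""
    traces = np.asarray(traces, dtype=float)
    sel = np.asarray(selection_bits, dtype=bool)
    return traces[sel].mean(axis=0) - traces[~sel].mean(axis=0)


# Toy example: 200 traces of 50 samples, where sample 20 leaks the bit
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=200)
traces = rng.normal(0.0, 1.0, size=(200, 50))
traces[:, 20] += 0.5 * bits                     # injected, hypothetical leakage
diff = dpa_difference_of_means(traces, bits)
print("peak sample:", int(np.argmax(np.abs(diff))))  # expected: 20
```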

    Marine Engines Performance and Emissions

    This book contains a collection of peer-reviewed scientific papers about marine engines' performance and emissions. These papers were carefully selected for the "Marine Engines Performance and Emissions" Special Issue of the Journal of Marine Science and Engineering. Recent advancements in engine technology have allowed designers to reduce emissions and improve performance. Nevertheless, further efforts are needed to comply with increasingly stringent emission legislation. This book was conceived for people interested in marine engines; its information on recent developments may be helpful to academics, researchers, and professionals engaged in the field of marine engineering.

    New innovations in pavement materials and engineering: A review on pavement engineering research 2021

    Sustainable and resilient pavement infrastructure is critical for current economic and environmental challenges. Over the past 10 years, pavement infrastructure has strongly supported the rapid development of the global economy. New theories, methods, technologies, and materials related to pavement engineering are emerging. Deterioration of pavement infrastructure is a typical multi-physics problem. Because of the coupled effects of traffic and environmental conditions, predicting pavement service life has become increasingly complicated and requires deep knowledge of pavement material analysis. In order to summarize current research and identify future research directions in pavement engineering, the Journal of Traffic and Transportation Engineering (English Edition) has launched a review paper on the topic of "New innovations in pavement materials and engineering: A review on pavement engineering research 2021". Based on the joint effort of 43 scholars from 24 well-known universities in highway engineering, this review paper systematically analyzes the research status and future development directions of five major fields of pavement engineering worldwide. The content includes asphalt binder performance and modeling, mixture performance and modeling of pavement materials, multi-scale mechanics, green and sustainable pavement, and intelligent pavement. Overall, this review paper provides references and insights for researchers and engineers in the field of pavement engineering.

    Development of tangible acoustic interfaces for human computer interaction

    Tangible interfaces, such as keyboards, mice, touch pads, and touch screens, are widely used in human computer interaction. A common disadvantage of these devices is the presence of mechanical or electronic components at the point of interaction with the interface. The aim of this work has been to investigate and develop new tangible interfaces that can be adapted to virtually any surface, by acquiring and studying the acoustic vibrations produced by the interaction of the user's finger with the surface. Various approaches have been investigated in this work, including the popular time difference of arrival (TDOA) method, time-frequency analysis of dispersive velocities, the time reversal method, and continuous object tracking. The received signal due to a tap at a source position can be considered the impulse response function of the wave propagation between the source and the receiver. Following time reversal theory, the signals induced by impacts at one position contain unique and consistent information that forms their signature. A pattern matching method, named Location Template Matching (LTM), has been developed to identify the signatures of the received signals from different individual positions. Various experiments have been performed for different purposes, such as consistency testing, acquisition configuration, and accuracy of recognition. Eventually, this can be used to implement HCI applications on arbitrary surfaces, including those of 3D objects and inhomogeneous materials. The resolution of the LTM method has been studied through different experiments, investigating factors such as optimal sensor configurations and the limitations of materials. On plates of the same material, the thickness is the essential determinant of resolution. With knowledge of the resolution for one material, a simpler and faster search method becomes feasible, reducing the computation. Multiple simultaneous impacts are also recognisable in certain cases. The TDOA method has also been evaluated with two conventional approaches. Taking into account the dispersive properties of vibration propagation in plates, time-frequency analysis with continuous wavelet transformation has been employed for the accurate localising of dispersive signals. In addition, a statistical maximum-likelihood estimation has been developed to improve the accuracy and reliability of acoustic localisation, and a method to measure and verify the dispersive velocities has been introduced. To enable the commonly required "drag & drop" function in the operation of graphical user interface (GUI) software, the tracking of a finger scratching on a surface needs to be implemented. To minimise the tracking error, a priori knowledge from previous measurements of source locations is needed to linearise the state model, enabling prediction of the location of the contact point and the direction of movement. An adaptive Kalman filter has been used for this purpose.
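    The following is a minimal, hypothetical Python sketch of the Location Template Matching idea described above: a received impact signal is scored against stored templates from known positions using normalized cross-correlation, and the best-matching template gives the position. It is not the thesis's implementation; the templates, labels, and signal lengths are illustrative assumptions.

```python
# Sketch of Location Template Matching via normalized cross-correlation.
import numpy as np


def _normalize(v):
    """Zero-mean, unit-norm copy of a waveform."""
    v = np.asarray(v, dtype=float)
    v = v - v.mean()
    return v / (np.linalg.norm(v) + 1e-12)


def ltm_classify(signal, templates):
    """Return the position label whose stored template best matches the
    received impact signal, scored by the peak of the cross-correlation.
    templates: dict mapping position label -> reference waveform."""
    x = _normalize(signal)
    best_label, best_score = None, -np.inf
    for label, ref in templates.items():
        score = np.max(np.correlate(x, _normalize(ref), mode="full"))
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score


# Toy usage: two stored tap signatures and a noisy repeat of the second one
rng = np.random.default_rng(1)
templates = {"A": rng.normal(size=256), "B": rng.normal(size=256)}
observed = templates["B"] + 0.2 * rng.normal(size=256)
print(ltm_classify(observed, templates))   # expected: ("B", ...)
```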