Optimization of LiDAR MEMS Data Computation for Point Cloud Creation
The goal of this project is to conceptualize and design a digital signal processing system for a LiDAR sensor to be applied in the automotive industry. The sensor's operation, and how it interfaces with the microcontroller where the processing will take place, must be properly analyzed. The appropriate algorithms must then be chosen in a justifiable way, and the system that performs them has to be designed. This system must operate under strict real-time constraints and meet performance and cost requirements
System architecture study of an orbital GPS user terminal
The generic RF and applications processing requirements for a GPS orbital navigator are considered. A line of demarcation between dedicated analog hardware and software/processor implementation, maximizing the latter, is discussed. A modular approach to R/PA design, which permits several varieties of receiver to be constructed from basic components, is described. A basic conclusion is that software signal processing of the output of the baseband correlator is the best point of transition from analog to digital signal processing. High-performance sets requiring multiple channels are developed from a generic design by replicating the RF processing segment and modifying the applications software to provide enhanced state propagation and estimation
Novel Computing Paradigms using Oscillators
This dissertation is concerned with new ways of using oscillators to perform computational tasks. Specifically, it introduces methods for building finite state machines (for general-purpose Boolean computation) as well as Ising machines (for solving combinatorial optimization problems) using coupled oscillator networks.

But firstly, why oscillators? Why use them for computation? An important reason is simply that oscillators are fascinating. Coupled oscillator systems often display intriguing synchronization phenomena where spontaneous patterns arise. From the synchronous flashing of fireflies to Huygens' clocks ticking in unison, from the molecular mechanism of circadian rhythms to the phase patterns in oscillatory neural circuits, the observation and study of synchronization in coupled oscillators has a long and rich history. Engineers across many disciplines have also taken inspiration from these phenomena, e.g., to design high-performance radio frequency communication circuits and optical lasers. To be able to contribute to the study of coupled oscillators and leverage them in novel paradigms of computing is without question an interesting and fulfilling quest in and of itself.

Moreover, as Moore's Law nears its limits, new computing paradigms that are different from mere conventional complementary metal-oxide-semiconductor (CMOS) scaling have become an important area of exploration. One broad direction aims to improve CMOS performance using device technology such as fin field-effect transistors (FinFET) and gate-all-around (GAA) FETs. Other new computing schemes are based on non-CMOS material and device technology, e.g., graphene, carbon nanotubes, memristive devices, optical devices, etc. Another growing trend in both academia and industry is to build digital application-specific integrated circuits (ASIC) suitable for speeding up certain computational tasks, often leveraging the parallel nature of unconventional non-von Neumann architectures. These schemes seek to circumvent the limitations posed at the device level through innovations at the system/architecture level.

Our work on oscillator-based computation represents a direction that is different from the above and features several points of novelty and attractiveness. Firstly, it makes meaningful use of nonlinear dynamical phenomena to tackle well-defined computational tasks that span analog and digital domains. It also differs from conventional computational systems at the fundamental logic encoding level, using timing/phase of oscillation as opposed to voltage levels to represent logic values. These differences bring about several advantages. The change of logic encoding scheme has several device- and system-level benefits related to noise immunity and interference resistance. The use of nonlinear oscillator dynamics allows our systems to address problems difficult for conventional digital computation. Furthermore, our schemes are amenable to realization using almost all types of oscillators, allowing a wide variety of devices from multiple physical domains to serve as the substrate for computing. This ability to leverage emerging multiphysics devices need not put off the realization of our ideas far into the future: implementations using well-established circuit technology are already both practical and attractive.

This work also differs from past work on oscillator-based computing, which mostly focuses on specialized image preprocessing tasks such as edge detection, image segmentation and pattern recognition. Perhaps its most unique feature is that our systems use transitions between analog and digital modes of operation: unlike other existing schemes that simply couple oscillators and let their phases settle to a continuum of values, we use a special type of injection locking to make each oscillator settle to one of several well-defined multistable phase-locked states, which we use to encode logic values for computation. Our schemes of oscillator-based Boolean and Ising computation are built upon this digitization of phase; they expand the scope of oscillator-based computing significantly.

Our ideas are built on years of past research in the modelling, simulation and analysis of oscillators. While there is a considerable amount of literature (arguably since Christiaan Huygens wrote about his observation of synchronized pendulum clocks in the 17th century) analyzing the synchronization phenomenon from different perspectives at different levels, we have been able to further develop the theory of injection locking, connecting the dots to find a path of analysis that starts from the low-level differential equations of individual oscillators and arrives at phase-based models and energy landscapes of coupled oscillator systems. This theoretical scaffolding is able not only to explain the operation of oscillator-based systems, but also to serve as the basis for simulation and design tools. Building on this, we explore the practical design of our proposed systems, demonstrate working prototypes, and develop the techniques, tools and methodologies essential for the process
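The phase dynamics sketched in this abstract can be illustrated in a few lines of code. The following is a minimal toy model, not the dissertation's actual models or tools: phases perform gradient descent on a Kuramoto-style coupling energy, while a second-harmonic injection term, ramped up over the run, digitizes each phase toward 0 or pi; the binarized phases are then read out as Ising spins. All parameter values are assumptions chosen for the illustration.

```python
import numpy as np

def oscillator_ising(J, steps=4000, dt=0.05, K=1.0, Ks=5.0, seed=1):
    """Toy coupled-oscillator Ising machine (illustrative sketch only).

    J is a symmetric coupling matrix (J[i][j] > 0 favours phase alignment).
    The second-harmonic injection term -Ks*sin(2*phi), annealed from 0 to Ks,
    settles each phase near 0 or pi, which we read out as spins +/-1.
    """
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    for k in range(steps):
        s, c = np.sin(phi), np.cos(phi)
        coupling = s * (J @ c) - c * (J @ s)   # sum_j J_ij * sin(phi_i - phi_j)
        ks_t = Ks * (k + 1) / steps            # ramp the injection strength
        phi += dt * (-K * coupling - ks_t * np.sin(2.0 * phi))
    spins = np.where(np.cos(phi) > 0.0, 1, -1)
    energy = -float(spins @ J @ spins) / 2.0   # Ising energy, each edge once
    return phi, spins, energy

# Two anti-ferromagnetically coupled oscillators settle half a cycle apart,
# i.e. opposite spins, the minimum-energy state for this J:
J = np.array([[0.0, -1.0], [-1.0, 0.0]])
phi, spins, energy = oscillator_ising(J)
```

The readout step is the point of the sketch: logic values live in the phase of oscillation rather than a voltage level, which is the encoding change the abstract credits with its noise-immunity benefits.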
Southwest Research Institute assistance to NASA in biomedical areas of the technology utilization program
The activities of the NASA Biomedical Applications Team at Southwest Research Institute between 25 August 1972 and 15 November 1973 are reported. The program background and methodology are discussed, along with the technology applications and biomedical community impacts
The application of digital techniques to an automatic radar track extraction system
'Modern' radar systems have come in for much criticism in recent years, particularly in the aftermath of the Falklands campaign. There have also been notable failures in commercial designs, including the well-publicised 'Nimrod' project, which was abandoned due to a persistent inability to meet signal processing requirements. There is clearly a need for improvement in radar signal processing techniques, as many designs rely on technology dating from the late 1970s, much of which is obsolete by today's standards. The Durham Radar Automatic Track Extraction System (RATES) is a practical implementation of current microprocessor technology applied to plot extraction of surveillance radar data. In addition to suggestions for the design of such a system, results are quoted for the predicted performance compared with a similar product using 1970s design methodology. Suggestions are given for the use of other VLSI techniques in plot extraction, including logic arrays and digital signal processors. In conclusion, there is an illustrated discussion of the use of systolic arrays in RATES and a prediction that this will represent the optimum architecture for future high-speed radar signal processors
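Plot extraction produces per-scan target reports that must then be smoothed into tracks. A fixed-gain alpha-beta filter is the classic lightweight choice for microprocessor-class hardware of this era; the sketch below is a generic textbook version and takes nothing from the RATES design itself (gains and timing are assumed).

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
    """Minimal one-dimensional alpha-beta tracker.

    Each scan: predict position from the current velocity estimate, then
    correct position (gain alpha) and velocity (gain beta) using the
    residual between the new plot and the prediction.
    """
    x, v = measurements[0], 0.0     # initialise on the first plot
    track = []
    for z in measurements[1:]:
        x_pred = x + v * dt         # constant-velocity prediction
        r = z - x_pred              # innovation (plot minus prediction)
        x = x_pred + alpha * r
        v = v + (beta / dt) * r
        track.append(x)
    return track

# A constant-velocity target: the smoothed track converges toward truth
track = alpha_beta_track([float(z) for z in range(10)])
```

The fixed gains are what make this attractive for real-time plot extraction: no matrix arithmetic per update, unlike a full Kalman filter.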
Low power, reduced complexity filtering and improved tracking accuracy for GNSS
This thesis addresses the power consumption problems resulting from the advent of multiple GNSS satellite systems, which create the need for receivers supporting multi-frequency, multi-constellation GNSS. Such a multi-mode receiver requires a substantial amount of signal processing, which translates to increased hardware complexity and higher power dissipation, reducing the battery life of a mobile platform. During the course of the work undertaken, a power analysis tool was developed in order to estimate the hardware utilisation as well as the power consumption of a digital system. Using this tool, it was established that most of the power was dissipated after the Analog to Digital Converter (ADC) by the filters associated with the decimation process. The power dissipation and hardware complexity of the decimator can be reduced substantially by using a minimum-phase Infinite Impulse Response (IIR) filter. For Global Positioning System (GPS) civilian signals, the use of IIR filters does not deleteriously affect the positional accuracy. However, when an IIR filter was deployed in a GLObalnaya NAvigatsionnaya Sputnikovaya Sistema (GLONASS) receiver, the pseudorange measurements varied by up to 200 metres. The work proposes various methods that overcome this pseudorange variation and reports results on par with linear-phase Finite Impulse Response (FIR) filters. The work also proposes a modified tracking loop that is capable of tracking very low Doppler frequencies without degrading tracking performance
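The trade described here, a cheap IIR decimator whose non-constant group delay perturbs code-phase (pseudorange) measurements, can be illustrated with standard filter-design tools. The sampling rate, bandwidth, and filter orders below are assumed for illustration and are not the thesis's figures.

```python
import numpy as np
from scipy import signal

fs = 16.368e6    # assumed front-end sampling rate, Hz
cutoff = 2.0e6   # assumed one-sided signal bandwidth, Hz

# Linear-phase FIR decimation filter: constant group delay of (N-1)/2 samples,
# so every frequency in the signal is delayed identically
fir = signal.firwin(numtaps=65, cutoff=cutoff, fs=fs)

# Low-order IIR alternative: far fewer multipliers, but its group delay
# varies across the band, which is what biases the code phase
b, a = signal.butter(4, cutoff, fs=fs)

w = np.linspace(0.0, 1.5e6, 256)   # probe the occupied band only
_, gd_fir = signal.group_delay((fir, [1.0]), w=w, fs=fs)
_, gd_iir = signal.group_delay((b, a), w=w, fs=fs)

ripple_samples = np.ptp(gd_iir)              # in-band delay variation
range_spread = 3.0e8 * ripple_samples / fs   # equivalent metres of range
```

At this sampling rate a single sample of delay variation corresponds to roughly 18 m of range, which gives a sense of how a raw IIR decimator can move pseudoranges by tens to hundreds of metres while a linear-phase FIR leaves them untouched.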
Liquid stream processing on the web: a JavaScript framework
The Web is rapidly becoming a mature platform to host distributed applications. Pervasive computing applications running on the Web are now common in the era of the Web of Things, which has made it increasingly simple to integrate sensors and microcontrollers into our everyday life. Such devices are of great interest to Makers with basic Web development skills. With them, Makers are able to build small smart stream processing applications with sensors and actuators without spending a fortune and without knowing much about the technologies they use. Thanks to ongoing Web technology trends enabling real-time peer-to-peer communication between Web-enabled devices, Web browsers and server-side JavaScript runtimes, developers are able to implement pervasive Web applications using a single programming language. These can take advantage of direct and continuous communication channels, going beyond what was possible in the early stages of the Web, to push data in real-time. Despite these recent advances, building stream processing applications on the Web of Things remains a challenging task. On the one hand, Web-enabled devices of different nature still have to communicate with different protocols. On the other hand, dealing with a dynamic, heterogeneous, and volatile environment like the Web requires developers to face issues like disconnections, unpredictable workload fluctuations, and device overload.

To help developers deal with such issues, in this dissertation we present the Web Liquid Streams (WLS) framework, a novel streaming framework for JavaScript. Developers implement streaming operators written in JavaScript and may interactively and dynamically define a streaming topology. The framework takes care of deploying the user-defined operators on the available devices and connecting them using the appropriate data channel, removing the burden of dealing with different deployment environments from the developers. Changes in the semantics of the application and in its execution environment may be applied at runtime without stopping the stream flow. Like a liquid adapts its shape to that of its container, the Web Liquid Streams framework makes streaming topologies flow across multiple heterogeneous devices, enabling dynamic operator migration without disrupting the data flow. By constantly monitoring the execution of the topology with a hierarchical controller infrastructure, WLS takes care of parallelising operator execution across multiple devices in case of bottlenecks, and of recovering the execution of the streaming topology in case one or more devices disconnect, by restarting lost operators on other available devices
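The underlying topology idea, user-defined operators connected by data channels while a runtime handles delivery and shutdown, can be shown with a toy queue-based pipeline. This sketch is in Python for brevity and is emphatically not the WLS API: WLS operators are JavaScript and are deployed across Web-enabled devices, which this single-process example does not attempt.

```python
import queue
import threading

def run_operator(fn, inbox, outbox):
    """Pull items from inbox, apply fn, push results downstream.
    A None item is a poison pill that shuts the pipeline down in order."""
    while True:
        item = inbox.get()
        if item is None:
            if outbox is not None:
                outbox.put(None)   # propagate shutdown downstream
            return
        result = fn(item)
        if outbox is not None:
            outbox.put(result)

# Topology: source -> square -> sink, each stage on its own thread
q1, q2 = queue.Queue(), queue.Queue()
results = []
t1 = threading.Thread(target=run_operator, args=(lambda x: x * x, q1, q2))
t2 = threading.Thread(target=run_operator, args=(results.append, q2, None))
t1.start(); t2.start()
for x in range(5):
    q1.put(x)          # the "source" pushes items into the topology
q1.put(None)
t1.join(); t2.join()
```

Parallelising a bottleneck operator in this picture amounts to running several `run_operator` workers on the same inbox, which is the single-machine analogue of what the abstract describes across devices.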
The development of an in-vivo method for assessing the antithrombotic properties of pharmaceutical compounds
The formation of a thrombus stems from the malfunction of a normal physiological function referred to as haemostasis and from the activity of blood platelets; such thrombi give rise to debilitating and often fatal strokes. Consequently, much effort is associated with the search for pharmacological compounds capable of their prevention or dispersion. Most of the primary screens associated with such work rely on in-vitro tests, and in separating the blood from its vasculature, the influence of, and results associated with, several naturally occurring moderators may be lost. There therefore exists an incentive to develop more representative in-vivo screening methods.
Following an introduction to the underlying physiology and pharmacology and a review of established screening methods, this thesis proceeds to describe the development of a novel technique suitable for such in-vivo studies. Its inception is shown to be a consequence of an amalgamation of ultrasonic methods associated with the clinical detection of occlusions and laser Doppler velocimetry. Both topics are individually surveyed and then brought together through a concept whereby the efficacy of compounds might be evaluated in animal models by measuring the velocity of blood in the fluid jet formed distal to an induced thrombus. The main underlying assumption is that the jet velocity will reflect the degree of encroachment of the thrombus into the vasculature. In accord with the evolved measurement rationale, there then follows a description of a specific laser Doppler velocimeter and some associated experiments designed to qualitatively appraise the validity of the underlying assumptions. The ensuing results in turn give rise to the design of a laser Doppler microscope, an analyser for extracting the required velocity information from the Doppler shift spectrum, and an additional series of experiments. Central to this latter stage of validation is the use of a thrombus analogue in a narrow-bore glass flow tube. Finally, some preliminary in-vivo experiments and results are presented
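The measurement rationale above rests on two textbook relations: continuity (flow conservation) links the occluded fraction of the lumen to the jet velocity, and the laser Doppler shift encodes that velocity. The sketch below makes this concrete; the wavelength, beam angle, refractive index, and flow numbers are all assumed for illustration and are not taken from the thesis.

```python
import math

def jet_velocity(v_upstream, occluded_fraction):
    """Continuity: flow Q = v * A is conserved, so if a thrombus occludes a
    fraction f of the lumen area, the jet velocity rises as 1 / (1 - f).
    (Simplified: uniform profiles, rigid vessel.)"""
    return v_upstream / (1.0 - occluded_fraction)

def doppler_shift(velocity, wavelength=632.8e-9, theta_deg=60.0, n=1.33):
    """Reference-beam laser Doppler shift in a medium of refractive index n:
    f_d = 2 * n * v * cos(theta) / wavelength  (assumed geometry)."""
    return 2.0 * n * velocity * math.cos(math.radians(theta_deg)) / wavelength

# 0.1 m/s upstream flow with a 75% area occlusion gives a 0.4 m/s jet,
# and a Doppler shift in the high-kilohertz range for a HeNe laser
v_jet = jet_velocity(0.1, 0.75)
fd = doppler_shift(v_jet)
```

This is the sense in which the jet velocity "reflects the degree of encroachment": under continuity, the velocity ratio directly encodes the residual lumen area, and the velocimeter only has to resolve the corresponding shift in the Doppler spectrum.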