Analysis of Chaos-Based Coded Modulations under Intersymbol Interference
Funding: Ministerio de Educación y Ciencia; Ministerio de Ciencia e Innovación; Ministerio de Industria; Comunidad de Madrid
Continuous-time analog circuits for statistical signal processing
Thesis (Ph.D.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2003. Vita. Includes bibliographical references (p. 205-209).
This thesis proposes an alternative paradigm for designing computers using continuous-time analog circuits. Digital computation sacrifices continuous degrees of freedom. A principled approach to recovering them is to view analog circuits as propagating probabilities in a message passing algorithm. Within this framework, analog continuous-time circuits can perform robust, programmable, high-speed, low-power, cost-effective, statistical signal processing. This methodology will have broad application to systems that can benefit from low-power, high-speed signal processing, and it offers the possibility of adaptable/programmable high-speed circuitry at frequencies where digital circuitry would be cost- and power-prohibitive. Many problems must be solved before the new design methodology can be shown to be useful in practice: continuous-time signal processing is not well understood; analog computational circuits known as "soft-gates" have been previously proposed, but a complementary set of analog memory circuits is still lacking; and analog circuits are usually tunable, rarely reconfigurable, but never programmable. The thesis develops an understanding of the convergence and synchronization of statistical signal processing algorithms in continuous time, and explores the use of linear and nonlinear circuits for analog memory. An exemplary embodiment called the Noise Lock Loop (NLL), built from these design primitives, is demonstrated to perform direct-sequence spread-spectrum acquisition and tracking, and promises order-of-magnitude wins over digital implementations. A building block for the construction of programmable analog gate arrays, the "soft-multiplexer", is also proposed.
by Benjamin Vigoda, Ph.D.
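As an illustration of the probability-domain "soft-gates" this abstract refers to, a minimal soft-XOR can be written in a few lines. This is the generic textbook formulation of a gate operating on bit probabilities, not the thesis's circuit implementation:

```python
def soft_xor(p_x: float, p_y: float) -> float:
    """Probability that XOR(X, Y) = 1, given independent bits
    X and Y with P(X=1) = p_x and P(Y=1) = p_y."""
    return p_x * (1.0 - p_y) + (1.0 - p_x) * p_y

# Hard logic is recovered at the extremes of the probability range:
print(soft_xor(1.0, 0.0))  # 1.0, the ordinary XOR truth table
# A maximally uncertain input keeps the output maximally uncertain:
print(soft_xor(0.5, 0.9))  # 0.5
```

Cascading such gates propagates probabilities rather than bits, which is exactly the message-passing view of analog circuits the thesis develops.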
High-Dimensional Information Detection based on Correlation Imaging Theory
Radar is a device that uses electromagnetic (EM) waves to detect targets; it can measure position and motion parameters and extract target characteristic information by analyzing the signal reflected from the target. From the perspective of radar's theoretical basis in physics, more than 70 years of radar development have rested on the EM field fluctuation theory. Many theories have been developed for one-dimensional signal processing. For example, a variety of threshold filtering methods have been widely used to resist interference during detection; optimal state estimation describes how the statistical characteristics of the target propagate over time in the probability domain; and compressed sensing greatly improves the efficiency of reconstructing sparse signals. These theories perform one-dimensional information processing, and the information they obtain is a deterministic description of the EM field. The correlated imaging technique, by contrast, builds on the high-order coherence property of the EM field, using the field's fluctuation characteristics to realize non-local imaging. Correlated imaging radar, a combination of correlated imaging techniques and modern information theory, will provide a novel remote sensing detection and imaging method. More importantly, correlated imaging radar is a new research field; a complete theoretical framework and application system therefore urgently need to be built up and improved.
Based on the coherence theory of the EM field, the work in this thesis explores methods for determining the statistical characteristics of the EM field so that high-dimensional target information can be detected, covering theoretical analysis, principle design, imaging modes, target detection models, image reconstruction algorithms, visibility enhancement, and system design. Simulations and real experiments are set up to prove the theory's validity and the systems' feasibility.
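The second-order correlation at the heart of correlated (ghost) imaging can be sketched numerically. The 1-D object, uniform speckle statistics, and frame count below are illustrative assumptions, not the systems built in the thesis:

```python
import random

def correlated_image(obj, n_frames=20000, seed=0):
    """Recover a 1-D object's transmission profile from intensity
    correlations alone: each frame illuminates the object with a
    random speckle pattern I, a single 'bucket' detector records
    B = sum(I * obj), and the image is the per-pixel covariance
    <B*I> - <B><I>."""
    rng = random.Random(seed)
    n = len(obj)
    sum_b = 0.0
    sum_i = [0.0] * n
    sum_bi = [0.0] * n
    for _ in range(n_frames):
        speckle = [rng.random() for _ in range(n)]
        bucket = sum(s * t for s, t in zip(speckle, obj))
        sum_b += bucket
        for i, s in enumerate(speckle):
            sum_i[i] += s
            sum_bi[i] += bucket * s
    mean_b = sum_b / n_frames
    return [sum_bi[i] / n_frames - mean_b * sum_i[i] / n_frames
            for i in range(n)]

obj = [0, 1, 1, 0, 1, 0, 0, 0]   # hypothetical binary 1-D object
img = correlated_image(obj)
# Pixels where the object transmits come out bright (covariance ~ Var(I));
# opaque pixels come out near zero.
```

The detector never sees a resolved image, only the scalar bucket value per frame; the image emerges purely from fluctuation statistics, which is the "non-local imaging" property described above.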
Hybrid Analog-Digital Co-Processing for Scientific Computation
In the past 10 years, computer architecture research has moved toward more heterogeneity and less adherence to conventional abstractions. Scientists and engineers hold an unshakable belief that computing holds keys to unlocking humanity's Grand Challenges. Acting on that belief, they have looked deeper into computer architecture to find specialized support for their applications. Likewise, computer architects have looked deeper into circuits and devices in search of untapped performance and efficiency. The lines between computer architecture layers---applications, algorithms, architectures, microarchitectures, circuits, and devices---have blurred. Against this backdrop, a menagerie of computer architectures is on the horizon: ones that forgo basic assumptions about computer hardware and require new thinking about how such hardware supports problems and algorithms.
This thesis is about revisiting hybrid analog-digital computing in support of diverse modern workloads. Hybrid computing had extensive applications in early computing history, and has been revisited for small-scale applications in embedded systems. But architectural support for using hybrid computing in modern workloads, at scale and with high accuracy solutions, has been lacking.
I demonstrate solving a variety of scientific computing problems, including stochastic ODEs, partial differential equations, linear algebra, and nonlinear systems of equations, as case studies in hybrid computing. I solve these problems on a system of multiple prototype analog accelerator chips built by a team at Columbia University. On that team I made contributions toward programming the chips, building the digital interface, and validating the chips' functionality. The analog accelerator chip is intended for use in conjunction with a conventional digital host computer.
The appeal and motivation for using an analog accelerator is efficiency and performance, but it comes with limitations in accuracy and problem sizes that we have to work around.
The first problem is how to pose problems in this unconventional computation model. Scientific computing phrases problems as differential equations and algebraic equations. Differential equations are a continuous view of the world, while algebraic equations are a discrete one. Prior work in analog computing focused mostly on differential equations, with algebraic equations playing only a minor role. The key to using the analog accelerator to support modern workloads on conventional computers is that these two viewpoints are interchangeable: the algebraic equations that underlie most workloads can be solved as differential equations, and differential equations are naturally solvable on the analog accelerator chip. A hybrid analog-digital computer architecture can therefore focus on solving linear and nonlinear algebra problems to support many workloads.
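The interchangeability of the two viewpoints can be shown in a short sketch: a linear algebraic system A x = b is recast as the ODE dx/dt = -(A x - b), whose equilibrium is the algebraic solution. Here forward Euler stands in for the continuous-time analog integrator, and the matrix is an illustrative assumption:

```python
def solve_linear_as_ode(A, b, dt=0.01, steps=20000):
    """Solve A x = b by integrating dx/dt = -(A x - b) with forward
    Euler; an analog accelerator would perform this integration in
    continuous time. Assumes the flow is stable (e.g. A symmetric
    positive definite), so x converges to the unique solution."""
    n = len(b)
    x = [0.0] * n
    for _ in range(steps):
        # Residual r = A x - b drives the state toward equilibrium.
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
             for i in range(n)]
        x = [x[i] - dt * r[i] for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # illustrative SPD system
b = [1.0, 2.0]
x = solve_linear_as_ode(A, b)
# The flow settles at the algebraic solution x = [1/11, 7/11].
```

Note that the fixed point of the iteration is exactly A x = b; the time-stepping (or the analog integration) only determines how fast the state settles there.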
The second problem is how to get accurate solutions using hybrid analog-digital computing. The analog computation model gives less accurate solutions because it gives up representing numbers as digital binary values, instead using the full range of analog voltage and current to represent real numbers. Prior work has established that encoding data in analog signals gives an energy efficiency advantage as long as the analog data precision is limited. While the analog accelerator alone may be useful for energy-constrained applications where inputs and outputs are imprecise, we are more interested in using analog in conjunction with digital for precise solutions. This thesis gives the novel insight that the trick is to solve nonlinear problems, where low-precision guesses are useful seeds for conventional digital algorithms.
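A minimal sketch of this idea, assuming a hypothetical analog front-end that delivers only a roughly 1%-accurate seed, hands that coarse guess to a standard digital Newton iteration:

```python
def newton_refine(f, df, x0, iters=6):
    """Refine a coarse seed x0 with Newton's method: each step
    roughly squares the number of correct digits."""
    x = x0
    for _ in range(iters):
        x -= f(x) / df(x)
    return x

# Hypothetical analog front-end output: sqrt(2) known only to ~1%.
analog_guess = 1.42
precise = newton_refine(lambda x: x * x - 2.0,
                        lambda x: 2.0 * x,
                        analog_guess)
# A handful of cheap digital steps take the ~1% seed to full
# double precision.
```

Because Newton's method converges quadratically near a root, the analog stage only has to land inside the basin of attraction; the digital stage supplies the accuracy the analog model gives up.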
The third problem is how to solve large problems using hybrid analog-digital computing. The analog computation model cannot handle large problems because it gives up step-by-step discrete-time operation, instead allowing variables to evolve smoothly in continuous time. To make that happen, the analog accelerator chains hardware for mathematical operations end-to-end; during computation, analog data flows through the hardware with no overheads from control logic or memory accesses. The downside is that the needed hardware size grows alongside the problem size. While scientific computing researchers have long split large problems into smaller subproblems to fit digital computers' constraints, this thesis is a first attempt to treat these divide-and-conquer algorithms as an essential tool for using the analog model of computation.
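A toy version of such a divide-and-conquer scheme is block Jacobi iteration, where each 2x2 diagonal block stands in for a subproblem small enough to fit on fixed-size accelerator hardware; the matrix below is an illustrative assumption, not the thesis's workloads:

```python
def block_jacobi_2x2(A, b, sweeps=200):
    """Divide-and-conquer sketch: split A x = b into 2x2 diagonal
    blocks -- stand-ins for subproblems small enough for a
    fixed-size accelerator -- solve each block exactly, and sweep
    until the pieces agree. Assumes len(b) is even and the blocks
    dominate the coupling terms, so the iteration converges."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        x_new = x[:]
        for s in range(0, n, 2):
            # Fold the other blocks' current values into the RHS.
            r0 = b[s] - sum(A[s][j] * x[j]
                            for j in range(n) if j not in (s, s + 1))
            r1 = b[s + 1] - sum(A[s + 1][j] * x[j]
                                for j in range(n) if j not in (s, s + 1))
            # Solve the small 2x2 block directly (Cramer's rule).
            a, c = A[s][s], A[s][s + 1]
            d, e = A[s + 1][s], A[s + 1][s + 1]
            det = a * e - c * d
            x_new[s] = (r0 * e - c * r1) / det
            x_new[s + 1] = (a * r1 - d * r0) / det
        x = x_new
    return x

A = [[4.0, 1.0, 0.0, 1.0],
     [1.0, 4.0, 1.0, 0.0],
     [0.0, 1.0, 4.0, 1.0],
     [1.0, 0.0, 1.0, 4.0]]
b = [6.0, 6.0, 6.0, 6.0]   # each row sums to 6, so the solution is all ones
x = block_jacobi_2x2(A, b)
```

In the hybrid setting, each block solve would be dispatched to the analog hardware while the digital host orchestrates the sweeps and the data exchange between subproblems.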
As we enter the post-Moore's law era of computing, unconventional architectures will offer specialized models of computation that uniquely support specific problem types. Two prominent examples are deep neural networks and quantum computers. Recent trends in computer science research show these unconventional architectures will soon have broad adoption. In this thesis I show another such specialized, unconventional architecture: using analog accelerators to solve problems in scientific computing. Computer architecture researchers will discover other important models of computation in the future. This thesis is an example of the discovery process, implementation, and evaluation of how an unconventional architecture supports specialized workloads.
Neurosciences and Wireless Networks: The Potential of Brain-Type Communications and Their Applications
This paper presents the first comprehensive tutorial on a promising research field located at the frontier of two well-established domains, neurosciences and wireless communications, motivated by the ongoing efforts to define the Sixth Generation of Mobile Networks (6G). In particular, this tutorial first provides a novel integrative approach that bridges the gap between these two seemingly disparate fields. Then, we present the state-of-the-art and key challenges of these two topics. In particular, we propose a novel systematization that divides the contributions into two groups, one focused on what neurosciences will offer to future wireless technologies in terms of new applications and systems architecture (Neurosciences for Wireless Networks), and the other on how wireless communication theory and next-generation wireless systems can provide new ways to study the brain (Wireless Networks for Neurosciences). For the first group, we explain concretely how current scientific understanding of the brain would enable new applications within the context of a new type of service that we dub brain-type communications, which has more stringent requirements than human- and machine-type communications. In this regard, we expose the key requirements of brain-type communication services and discuss how future wireless networks can be equipped to deal with such services. Meanwhile, for the second group, we thoroughly explore modern communication systems paradigms, including the Internet of Bio-Nano Things and wireless-integrated brain-machine interfaces, in addition to highlighting how complex systems tools can help bridge upcoming advances in wireless technologies and applications of neurosciences. Brain-controlled vehicles are then presented as our case study to demonstrate, for both groups, the potential created by the convergence of neurosciences and wireless communications, likely in 6G.
In summary, this tutorial is expected to provide a largely missing articulation between neurosciences and wireless communications while delineating concrete ways to move forward in such an interdisciplinary endeavor.