A rigorous exposition of the LEMMA method for analog and mixed-signal testing.
The linear error-mechanism modeling technique is an effective tool for testing analog and mixed-signal devices. It minimizes the number of measurements required to characterize the static transfer function of a circuit by determining a small number of parameters of a linear error model and then predicting the entire response error. This work focuses on optimizing the linear error-mechanism model algorithm (LEMMA), introducing novel refinements that are shown to improve its performance significantly. We outline the implementation of the algorithm in a tutorial manner, paying due consideration to the underlying theory where required.
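As a hedged illustration of the idea behind LEMMA (not the authors' refined algorithm), the sketch below fits a two-parameter linear error model to a handful of measured codes and then predicts the full static error curve. The two error-mechanism shapes (gain and bow), the code count, and all numeric values are illustrative assumptions.

```python
# Sketch of linear error-mechanism modeling: fit a small linear error
# model from a few measurements, then predict the full response error.
# The two error "mechanisms" (gain and bow) are assumed shapes.

N = 256  # number of codes of the hypothetical converter

def basis(k):
    """Error-mechanism basis evaluated at code k (assumed shapes)."""
    gain = k / (N - 1)                      # linear gain-error shape
    bow = k * (N - 1 - k) / (N - 1) ** 2    # bow (2nd-order) shape
    return gain, bow

# "True" device error: a linear combination of the two mechanisms.
true_coeffs = (0.8, -1.5)
def true_error(k):
    g, b = basis(k)
    return true_coeffs[0] * g + true_coeffs[1] * b

# Measure the error at only a few codes (instead of all N).
measured_codes = [10, 80, 150, 230]
y = [true_error(k) for k in measured_codes]

# Least-squares fit of the 2 model parameters via the normal equations.
a11 = a12 = a22 = r1 = r2 = 0.0
for k, yk in zip(measured_codes, y):
    g, b = basis(k)
    a11 += g * g; a12 += g * b; a22 += b * b
    r1 += g * yk; r2 += b * yk
det = a11 * a22 - a12 * a12
c_gain = (a22 * r1 - a12 * r2) / det
c_bow = (a11 * r2 - a12 * r1) / det

# Predict the error at every code from the fitted parameters.
predicted = [c_gain * basis(k)[0] + c_bow * basis(k)[1] for k in range(N)]
worst = max(abs(predicted[k] - true_error(k)) for k in range(N))
print(f"fitted ({c_gain:.3f}, {c_bow:.3f}), worst prediction error {worst:.2e}")
```

With a noiseless linear error model, four measurements over-determine the two parameters and the fit recovers them exactly; in practice the measurement count and code placement trade off against noise sensitivity.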
Developing Model-Based Design Evaluation for Pipelined A/D Converters
This paper presents a prospective approach to modeling, design evaluation, and error determination for pipelined A/D converter architectures. In contrast with conventional ADC modeling algorithms, which aim to extract the maximum ADC non-linearity error, the approach presented here decomposes the magnitudes of individual error sources from a measured or simulated response of an ADC device. The design evaluation methodology was successfully applied to Nyquist-rate cyclic converters in our previous work [13]; here, we extend its principles to the pipelined architecture. This qualitative decomposition can contribute significantly to the ADC calibration procedure performed on the production line in terms of integral and differential nonlinearity, because the knowledge of the individual contributors to ADC performance provided by the proposed method helps to adjust the values of on-chip converter components so as to equalize (and possibly minimize) the total non-linearity error. The design evaluation procedure is demonstrated on a system-level design example of a pipelined A/D converter. Simulation results are given for each stage of the design evaluation process, starting from INL performance extraction in a powerful Virtual Testing Environment implemented in Maple™ software and finishing with error-source simulation, modeling of the pipelined ADC structure, and determination of error-source contributions, suitable for a generic process flow.
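As a hedged sketch (not the paper's Maple-based Design Evaluation environment), the behavioral model below implements a simple 1-bit-per-stage pipelined ADC and shows how a single error source, an assumed inter-stage residue-gain error in the first stage, perturbs the transfer function relative to the ideal converter. All parameter values are illustrative.

```python
# Behavioral model of a 1-bit-per-stage pipelined ADC (illustrative).
# Input range is [-1, 1); each stage resolves one bit and amplifies the
# residue by a nominal gain of 2. A residue-gain error in the first
# stage is the single modeled error source (an assumed value).

def pipeline_adc(v, nbits, first_stage_gain=2.0):
    code = 0
    for stage in range(nbits):
        b = 1 if v >= 0.0 else 0
        code = (code << 1) | b
        gain = first_stage_gain if stage == 0 else 2.0
        v = gain * v - (2 * b - 1)   # amplified residue
    return code

def ideal_code(v, nbits):
    """Ideal quantizer over [-1, 1)."""
    return min(int((v + 1.0) / 2.0 * (1 << nbits)), (1 << nbits) - 1)

NBITS = 6
sweep = [-1.0 + 2.0 * i / 4000 for i in range(4000)]

# With the nominal gain the pipeline matches the ideal quantizer.
mismatch_nom = sum(pipeline_adc(v, NBITS) != ideal_code(v, NBITS)
                   for v in sweep)

# With a 2% first-stage gain error, codes deviate around major code
# transitions, which is what a static linearity test reports as INL/DNL.
mismatch_err = sum(pipeline_adc(v, NBITS, 1.96) != ideal_code(v, NBITS)
                   for v in sweep)
print(mismatch_nom, mismatch_err)
```

Comparing the erroneous sweep against the ideal one, code by code, is the crude version of the decomposition idea: a distinct error source leaves a distinct signature on the transfer function that can be attributed back to its contributor.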
Computation in Economics
This is an attempt at a succinct survey, from methodological and epistemological perspectives, of the burgeoning, apparently unstructured, field of what is often – misleadingly – referred to as computational economics. We identify and characterise four frontier research fields, encompassing both micro and macro aspects of economic theory, where machine computation plays a crucial role in formal modelling exercises: algorithmic behavioural economics, computable general equilibrium theory, agent-based computational economics and computable economics. In some senses these four research frontiers raise, without resolving, many interesting methodological and epistemological issues in economic theorising in (alternative) mathematical modes.
Keywords: Classical Behavioural Economics, Computable General Equilibrium theory, Agent Based Economics, Computable Economics, Computability, Constructivity, Numerical Analysis
Decomposition Methods for Large Scale LP Decoding
When binary linear error-correcting codes are used over symmetric channels, a
relaxed version of the maximum likelihood decoding problem can be stated as a
linear program (LP). This LP decoder can be used to decode error-correcting
codes at bit-error-rates comparable to state-of-the-art belief propagation (BP)
decoders, but with significantly stronger theoretical guarantees. However, LP
decoding when implemented with standard LP solvers does not easily scale to the
block lengths of modern error correcting codes. In this paper we draw on
decomposition methods from optimization theory, specifically the Alternating
Direction Method of Multipliers (ADMM), to develop efficient distributed
algorithms for LP decoding.
The key enabling technical result is a "two-slice" characterization of the
geometry of the parity polytope, which is the convex hull of all codewords of a
single parity check code. This new characterization simplifies the
representation of points in the polytope. Using this simplification, we develop
an efficient algorithm for Euclidean norm projection onto the parity polytope.
This projection is required by ADMM and allows us to use LP decoding, with all
its theoretical guarantees, to decode large-scale error correcting codes
efficiently.
We present numerical results for LDPC codes of lengths more than 1000. The
waterfall region of LP decoding is seen to initiate at a slightly higher
signal-to-noise ratio than for sum-product BP; however, no error floor is
observed for LP decoding, in contrast to BP. Our implementation of LP decoding
using ADMM executes as fast as our baseline sum-product BP decoder, is fully
parallelizable, and can be seen to implement a type of message-passing with a
particularly simple schedule.
Comment: 35 pages, 11 figures. An early version of this work appeared at the
49th Annual Allerton Conference, September 2011. This version to appear in
IEEE Transactions on Information Theory.
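To make the geometry concrete, the sketch below checks membership in the parity polytope via its standard facet description (box constraints plus one inequality per odd-sized subset). This is background for the projection problem, not the paper's two-slice characterization itself, and the test points are illustrative.

```python
# Membership test for the parity polytope PP_d: the convex hull of all
# even-weight binary vectors of length d. Standard facet description:
#   0 <= v_i <= 1, and for every subset S with |S| odd:
#     sum_{i in S} v_i - sum_{i not in S} v_i <= |S| - 1.
# For fixed |S| = k the left-hand side is maximized by the k largest
# entries, so it suffices to check each odd k against the sorted vector.

def in_parity_polytope(v, tol=1e-9):
    if any(x < -tol or x > 1.0 + tol for x in v):
        return False
    s = sorted(v, reverse=True)
    total = sum(s)
    top = 0.0
    for k in range(1, len(v) + 1):
        top += s[k - 1]
        if k % 2 == 1 and top - (total - top) > k - 1 + tol:
            return False
    return True

# Even-weight vertices are members; odd-weight vertices are not.
print(in_parity_polytope([1, 1, 0, 0]))     # even weight -> True
print(in_parity_polytope([1, 0, 0, 0]))     # odd weight  -> False
# A convex combination of even-weight vertices stays inside:
print(in_parity_polytope([0.5, 0.5, 0.5]))  # avg of 000,110,101,011 -> True
# A fractional point outside PP_4 (violates S = first three coordinates):
print(in_parity_polytope([1, 1, 0.5, 0]))   # -> False
```

The number of facets is exponential in d, which is exactly why a compact representation of points in the polytope, and a fast Euclidean projection onto it, matters for ADMM at practical block lengths.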
Exact and efficient solutions of the LMC Multitask Gaussian Process model
The Linear Model of Co-regionalization (LMC) is a very general model of
multitask Gaussian processes for regression or classification. While its
expressivity and conceptual simplicity are appealing, naive implementations
have cubic complexity in the number of datapoints and number of tasks, making
approximations mandatory for most applications. However, recent work has shown
that under some conditions the latent processes of the model can be decoupled,
leading to a complexity that is only linear in the number of said processes. We
here extend these results, showing from the most general assumptions that the
only condition necessary for an efficient exact computation of the LMC is a mild
hypothesis on the noise model. We introduce a full parametrization of the
resulting \emph{projected LMC} model, and an expression of the marginal
likelihood enabling efficient optimization. We perform a parametric study on
synthetic data to show the excellent performance of our approach, compared to
an unrestricted exact LMC and approximations of the latter. Overall, the
projected LMC appears as a credible and simpler alternative to state-of-the-art
models, which greatly facilitates some computations such as leave-one-out
cross-validation and fantasization.
Comment: 21 pages, 5 figures, submitted to AISTATS.
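For context, the standard LMC construction (notation assumed here, not taken from the paper) mixes Q latent GPs into T task outputs, which is what makes naive exact inference cubic in both data and task counts:

```latex
% Standard LMC: each of T task outputs mixes Q latent GPs u_q ~ GP(0, k_q)
f_t(x) = \sum_{q=1}^{Q} a_{t,q}\, u_q(x),
\qquad
\operatorname{cov}\big(f_t(x), f_{t'}(x')\big)
  = \sum_{q=1}^{Q} a_{t,q}\, a_{t',q}\, k_q(x, x').
% Naive exact inference inverts the full nT x nT joint covariance,
% costing O(n^3 T^3); decoupling the latent processes reduces this to
% Q independent single-output GP solves, i.e. linear in Q.
```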
Multi-Device Task-Oriented Communication via Maximal Coding Rate Reduction
Task-oriented communication offers ample opportunities to alleviate the
communication burden in next-generation wireless networks. Most existing work
designed the physical-layer communication modules and learning-based codecs
with distinct objectives: learning is targeted at accurate execution of
specific tasks, while communication aims at optimizing conventional
communication metrics, such as throughput maximization, delay minimization, or
bit error rate minimization. The inconsistency between the design objectives
may hinder the exploitation of the full benefits of task-oriented
communications. In this paper, we consider a specific task-oriented
communication system for multi-device edge inference over a multiple-input
multiple-output (MIMO) multiple-access channel, where the learning (i.e.,
feature encoding and classification) and communication (i.e., precoding)
modules are designed with the same goal of inference accuracy maximization.
Instead of end-to-end learning which involves both the task dataset and
wireless channel during training, we advocate a separate design of learning and
communication to achieve the consistent goal. Specifically, we leverage the
maximal coding rate reduction (MCR2) objective as a surrogate to represent the
inference accuracy, which allows us to explicitly formulate the precoding
optimization problem. We cast valuable insights into this formulation and
develop a block coordinate descent (BCD) solution algorithm. Moreover, the MCR2
objective also serves the loss function of the feature encoding network, based
on which we characterize the received features as a Gaussian mixture (GM)
model, facilitating a maximum a posteriori (MAP) classifier to infer the
result. Simulation results on both the synthetic and real-world datasets
demonstrate the superior performance of the proposed method compared to various
baselines.
Comment: submitted to IEEE for possible publication.
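For reference, the coding-rate-reduction objective from the MCR2 literature (notation assumed here: m features of dimension d stacked in Z, partitioned into classes Z^k of size m_k, with distortion ε) measures how much more coding rate the features require jointly than within each class:

```latex
\Delta R(Z) \;=\;
\frac{1}{2}\log\det\!\Big(I + \frac{d}{m\,\epsilon^{2}}\, Z Z^{\top}\Big)
\;-\; \sum_{k} \frac{m_k}{m}\cdot
\frac{1}{2}\log\det\!\Big(I + \frac{d}{m_k\,\epsilon^{2}}\, Z^{k} (Z^{k})^{\top}\Big).
% Maximizing \Delta R pushes classes apart (first term) while
% compressing each class (second term) -- a surrogate for accuracy
% that can be written explicitly in the precoding variables.
```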