TDRSS/user satellite timing study
A timing analysis for data readout through the Tracking and Data Relay Satellite System (TDRSS) was presented. Various time tagging approaches were considered and the resulting accuracies delineated. The TDRSS was also defined and described in detail.
Analytical techniques: A compilation
A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.
Digital Signal Processing (Second Edition)
This book provides an account of the mathematical background, computational methods and software engineering associated with digital signal processing. The aim has been to provide the reader with the mathematical methods required for signal analysis which are then used to develop models and algorithms for processing digital signals and finally to encourage the reader to design software solutions for Digital Signal Processing (DSP). In this way, the reader is invited to develop a small DSP library that can then be expanded further with a focus on his/her research interests and applications.
There are of course many excellent books and software systems available on this subject area. However, in many of these publications, the relationship between the mathematical methods associated with signal analysis and the software available for processing data is not always clear. Either the publications concentrate on mathematical aspects that are not focused on practical programming solutions, or they elaborate on the software development of solutions in terms of working ‘black-boxes’ without covering the mathematical background and analysis associated with the design of these software solutions. Thus, this book has been written with the aim of giving the reader a technical overview of the mathematics and software associated with the ‘art’ of developing numerical algorithms and designing software solutions for DSP, all of which is built on firm mathematical foundations. For this reason, the work is, by necessity, rather lengthy and covers a wide range of subjects organised into four principal parts. Part I provides the mathematical background for the analysis of signals; Part II considers the computational techniques (principally those associated with linear algebra and the linear eigenvalue problem) required for array processing and associated analysis (error analysis, for example); Part III introduces the reader to the essential elements of software engineering using the C programming language, tailored to those features that are used for developing C functions or modules for building a DSP library.
The material associated with parts I, II and III is then used to build up a DSP system by defining a number of ‘problems’ and then addressing the solutions in terms of presenting an appropriate mathematical model, undertaking the necessary analysis, developing an appropriate algorithm and then coding the solution in C. This material forms the basis for part IV of this work.
In most chapters, a series of tutorial problems is given for the reader to attempt, with answers provided in Appendix A. These problems include theoretical, computational and programming exercises. Part II of this work is relatively long and arguably contains too much material on the computational methods for linear algebra. However, this material and the complementary material on vector and matrix norms form the computational basis for many methods of digital signal processing. Moreover, this important and widely researched subject area forms the foundations not only of digital signal processing and control engineering, for example, but also of numerical analysis in general.
The material presented in this book is based on the lecture notes and supplementary material developed by the author for an advanced Masters course ‘Digital Signal Processing’, which was first established at Cranfield University, Bedford in 1990 and modified when the author moved to De Montfort University, Leicester in 1994. The programmes are still operating at these universities and the material has been used by some 700+ graduates since its establishment and development in the early 1990s. The material was enhanced and developed further when the author moved to the Department of Electronic and Electrical Engineering at Loughborough University in 2003 and now forms part of the Department’s post-graduate programmes in Communication Systems Engineering. The original Masters programme included a taught component covering a period of six months based on two semesters, each semester being composed of four modules. The material in this work covers the first semester and its four parts reflect the four modules delivered. The material delivered in the second semester is published as a companion volume to this work entitled Digital Image Processing (Horwood Publishing, 2005), which covers the mathematical modelling of imaging systems and the techniques that have been developed to process and analyse the data such systems provide.
Since the publication of the first edition of this work in 2003, a number of minor changes and some additions have been made. The material on programming and software engineering in Chapters 11 and 12 has been extended. This includes some additions and further solved and supplementary questions which are included throughout the text. Nevertheless, it is worth pointing out that, while every effort has been made by the author and publisher to provide a work that is error free, it is inevitable that typing errors and various ‘bugs’ will occur. If so, and in particular if the reader starts to suffer from a lack of comprehension over certain aspects of the material (due to errors or otherwise), then he/she should not assume that there is something wrong with themselves, but with the author.
Communication Efficient Algorithms for Generating Massive Networks
Massive complex systems are prevalent throughout all of our lives, from biological systems such as the human genome to technological networks such as Facebook or Twitter. Rapid advances in technology allow us to gather more and more data connected to these systems. Analyzing and extracting this huge amount of information is a crucial task for a variety of scientific disciplines.
A common abstraction for handling complex systems is the network (graph), made up of entities and their relationships. For example, we can represent wireless ad hoc networks in terms of nodes and their connections with each other. We then identify the nodes as vertices and their connections as edges between the vertices. This abstraction allows us to develop algorithms that are independent of the underlying domain.
Designing algorithms for massive networks is a challenging task that requires thorough analysis and experimental evaluation. A major hurdle for this task is the scarcity of publicly available large-scale datasets. To address this issue, we can make use of network generators [21]. These generators allow us to produce synthetic instances that exhibit properties found in many real-world networks.
In this thesis we develop a set of novel graph generators with a focus on scalability. In particular, we cover the classic Erdős–Rényi model, random geometric graphs and random hyperbolic graphs. These models represent different real-world systems, from the aforementioned wireless ad hoc networks [40] to social networks [44]. We ensure scalability by making use of pseudorandomization via hash functions and redundant computations. The resulting network generators are communication agnostic, i.e. they require no communication. This allows us to generate massive instances of up to 2^43 vertices and 2^47 edges in less than 22 minutes on 32,768 processors.
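The pseudorandomization idea can be illustrated with a minimal sketch (a hedged illustration, not the thesis's actual implementation; the function names and the Erdős–Rényi-style setting are chosen here for concreteness): each processor decides the existence of an edge by hashing the vertex pair under a shared seed, so any rank can recompute any edge locally and no communication is ever required.

```python
import hashlib

def edge_exists(u: int, v: int, p: float, seed: int = 42) -> bool:
    """Decide edge (u, v) by hashing the pair under a shared seed, so
    the decision is reproducible on every processor independently."""
    a, b = (u, v) if u < v else (v, u)  # undirected: canonical order
    h = hashlib.sha256(f"{seed}:{a}:{b}".encode()).digest()
    # Map the first 8 hash bytes to a uniform value in [0, 1).
    r = int.from_bytes(h[:8], "big") / 2**64
    return r < p

def local_edges(rank: int, nprocs: int, n: int, p: float):
    """Each of `nprocs` ranks generates the edges incident to its slice
    of vertices independently; the union over all ranks is a G(n, p)
    graph, with no messages exchanged between ranks."""
    lo = rank * n // nprocs
    hi = (rank + 1) * n // nprocs
    return [(u, v) for u in range(lo, hi)
                   for v in range(u + 1, n)
                   if edge_exists(u, v, p)]

# Two "processors" jointly covering a 6-vertex graph with p = 0.5:
edges = local_edges(0, 2, 6, 0.5) + local_edges(1, 2, 6, 0.5)
```

Because the decision depends only on the canonical vertex pair and the seed, redundant computation of the same pair on different ranks yields identical results, which is what makes this style of generator communication agnostic.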
In addition to proving theoretical bounds for each generator, we perform an extensive experimental evaluation. We cover both their sequential performance and their scaling behavior. We are able to show that our algorithms are competitive with state-of-the-art implementations found in network analysis libraries. Additionally, our generators exhibit near-optimal scaling behavior for large instances. Finally, we show that pseudorandomization has little to no measurable impact on the quality of our generated instances.
Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ordinary differential equations
Although double-precision floating-point arithmetic currently dominates
high-performance computing, there is increasing interest in smaller and simpler
arithmetic types. The main reasons are potential improvements in energy
efficiency and memory footprint and bandwidth. However, simply switching to
lower-precision types typically results in increased numerical errors. We
investigate approaches to improving the accuracy of reduced-precision
fixed-point arithmetic types, using examples in an important domain for
numerical computation in neuroscience: the solution of Ordinary Differential
Equations (ODEs). The Izhikevich neuron model is used to demonstrate that
rounding has an important role in producing accurate spike timings from
explicit ODE solution algorithms. In particular, fixed-point arithmetic with
stochastic rounding consistently results in smaller errors compared to single
precision floating-point and fixed-point arithmetic with round-to-nearest
across a range of neuron behaviours and ODE solvers. A computationally much cheaper alternative is also investigated, inspired by the concept of dither, a widely understood mechanism for providing resolution below the least significant bit (LSB) in digital signal processing. These results will have implications for the solution of ODEs in other subject areas, and should also be directly relevant to the huge range of practical problems that are represented by Partial Differential Equations (PDEs).
Comment: Submitted to Philosophical Transactions of the Royal Society
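Stochastic rounding, the central technique above, can be sketched in a few lines (a minimal illustration, not the authors' fixed-point arithmetic; the helper names are invented): a value is rounded up with probability equal to its fractional residue, so the expected quantization error is zero rather than biased toward the nearest representable value.

```python
import math
import random

def to_fixed_stochastic(x: float, frac_bits: int, rng=random.random) -> int:
    """Quantise x to a fixed-point integer with `frac_bits` fractional
    bits using stochastic rounding: round up with probability equal to
    the distance from the lower representable value, so the rounding
    error has zero mean."""
    scaled = x * (1 << frac_bits)
    lower = math.floor(scaled)
    frac = scaled - lower          # residue in [0, 1)
    return lower + (1 if rng() < frac else 0)

def to_float(q: int, frac_bits: int) -> float:
    """Convert a fixed-point integer back to a float."""
    return q / (1 << frac_bits)
```

With round-to-nearest, 0.1 in a 4-fractional-bit format is always stored as 2/16 = 0.125; with stochastic rounding it is stored as 2/16 with probability 0.6 and 1/16 with probability 0.4, so the long-run average recovers 0.1. This zero-mean property is what lets small increments survive repeated accumulation in an ODE solver instead of being systematically rounded away.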
Non-Reversible Parallel Tempering: a Scalable Highly Parallel MCMC Scheme
Parallel tempering (PT) methods are a popular class of Markov chain Monte
Carlo schemes used to sample complex high-dimensional probability
distributions. They rely on a collection of interacting auxiliary chains
targeting tempered versions of the target distribution to improve the
exploration of the state-space. We provide here a new perspective on these
highly parallel algorithms and their tuning by identifying and formalizing a
sharp divide in the behaviour and performance of reversible versus
non-reversible PT schemes. We show theoretically and empirically that a class
of non-reversible PT methods dominates its reversible counterparts and identify
distinct scaling limits for the non-reversible and reversible schemes, the
former being a piecewise-deterministic Markov process and the latter a
diffusion. These results are exploited to identify the optimal annealing
schedule for non-reversible PT and to develop an iterative scheme approximating
this schedule. We provide a wide range of numerical examples supporting our
theoretical and methodological contributions. The proposed methodology is applicable to sample from a distribution π with a density with respect to a reference distribution π0 and to compute the normalizing constant. A typical use case is when π0 is a prior distribution, L a likelihood function and π the corresponding posterior.
Comment: 74 pages, 30 figures. The method is implemented in an open-source probabilistic programming language available at
https://github.com/UBC-Stat-ML/blangSD
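The reversible/non-reversible divide above concerns the swap schedule between chains. A minimal sketch under assumed notation (this is not the paper's reference implementation, and the local exploration moves at each temperature are omitted): a non-reversible scheme deterministically alternates between attempting swaps on even-indexed and odd-indexed adjacent pairs, rather than picking one pair at random each round as a reversible scheme would.

```python
import math
import random

def deo_swap_round(states, betas, log_target, round_idx):
    """One non-reversible swap round in the deterministic even-odd
    (DEO) style: attempt swaps on pairs (0,1), (2,3), ... on even
    rounds and on pairs (1,2), (3,4), ... on odd rounds.  Chain i
    targets the tempered density proportional to exp(betas[i] *
    log_target(x))."""
    start = round_idx % 2
    for i in range(start, len(states) - 1, 2):
        # Metropolis acceptance ratio for exchanging chains i and i+1.
        log_alpha = (betas[i] - betas[i + 1]) * (
            log_target(states[i + 1]) - log_target(states[i]))
        if math.log(random.random()) < log_alpha:
            states[i], states[i + 1] = states[i + 1], states[i]
    return states
```

Because every adjacent pair is attempted on a fixed alternating schedule, an accepted state tends to keep moving in one direction along the temperature ladder instead of diffusing back and forth, which is the intuition behind the piecewise-deterministic scaling limit mentioned in the abstract.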
Quasi-Monte Carlo in finance: extending for problems of high effective dimension
In this paper we show that it is possible to extend the use of quasi-Monte Carlo to applications of high effective dimension. This is achieved through a combination of a careful construction of the Sobol sequence and an appropriately chosen decomposition of the covariance matrix of the risk factors. The effectiveness of this procedure is demonstrated as we price average options with nominal dimensions ranging up to 550 (effective dimension around 300). We believe the method we present is easy to implement and should be of great interest to practitioners.
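One commonly used "appropriately chosen decomposition" for lowering effective dimension is the PCA (eigenvalue) decomposition of the covariance matrix; the following NumPy sketch is illustrative only and is not necessarily the decomposition the authors use. The idea is that, for a factorisation A with A·Aᵀ = C ordered by eigenvalue, most of the variance of the simulated path is driven by the first few coordinates, which are exactly the coordinates where a Sobol sequence is most uniform.

```python
import numpy as np

def brownian_cov(n: int, T: float = 1.0) -> np.ndarray:
    """Covariance matrix of Brownian motion sampled at t_i = i*T/n,
    i.e. C[i, j] = min(t_i, t_j)."""
    t = np.arange(1, n + 1) * T / n
    return np.minimum.outer(t, t)

def pca_decomposition(C: np.ndarray) -> np.ndarray:
    """Return A with A @ A.T == C, columns ordered by decreasing
    eigenvalue.  Mapping a low-discrepancy point z to A @ z then
    concentrates the variance in the leading coordinates, lowering the
    effective dimension of the integrand."""
    w, V = np.linalg.eigh(C)              # ascending eigenvalues
    idx = np.argsort(w)[::-1]             # reorder to descending
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

For Brownian motion the leading eigenvalue alone carries roughly 80% of the total variance, which is why this reordering helps; a Brownian-bridge construction achieves a similar concentration at lower cost and is another standard choice.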