An Implementation of List Successive Cancellation Decoder with Large List Size for Polar Codes
Polar codes are the first class of forward error correction (FEC) codes proven to achieve channel capacity. Using list successive cancellation decoding (LSCD) with a large list size, the error correction performance of polar codes exceeds that of other well-known FEC codes. However, the hardware complexity of LSCD rapidly increases with the list size, which incurs high usage of the resources on a field programmable gate array (FPGA) and significantly impedes the practical deployment of polar codes. To alleviate this complexity, this paper proposes two low-complexity decoding schemes and the corresponding LSCD architectures targeting FPGA implementation. The architecture is implemented on an Altera Stratix V FPGA.
Measurement results show that, even with a list size of 32, the architecture is able to decode a codeword of a 4096-bit polar code within 150 µs, achieving a throughput of 27 Mbps.
Comment: 4 pages, 4 figures, 4 tables. Published in the 27th International Conference on Field Programmable Logic and Applications (FPL), 2017.
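The reported numbers are mutually consistent: 4096 bits every 150 µs is about 27.3 Mbps. The operation that makes large list sizes expensive is path expansion and pruning: each decoded bit doubles the candidate paths, and only the L best (by path metric) survive, which in hardware means sorting 2L metrics per bit. A minimal C sketch of that step, with all types and names ours rather than the paper's:

    #include <stdlib.h>
    #include <string.h>

    /* One decoding path: a partial hard-decision vector and its path
       metric (lower = more likely). Fields are illustrative. */
    typedef struct {
        unsigned char bits[4096]; /* decisions so far */
        double metric;            /* accumulated path metric */
    } Path;

    static int cmp_metric(const void *a, const void *b) {
        double d = ((const Path *)a)->metric - ((const Path *)b)->metric;
        return (d > 0) - (d < 0);
    }

    /* After estimating bit `pos`, every surviving path forks into two
       hypotheses (bit = 0 and bit = 1); keep only the L best of the 2L.
       `paths` must have room for L entries; `llr` holds one per path. */
    size_t lscd_expand_and_prune(Path *paths, size_t n_paths, size_t L,
                                 size_t pos, const double *llr)
    {
        Path *cand = malloc(2 * n_paths * sizeof *cand);
        for (size_t i = 0; i < n_paths; i++) {
            for (int b = 0; b <= 1; b++) {
                Path *c = &cand[2 * i + b];
                *c = paths[i];
                c->bits[pos] = (unsigned char)b;
                /* Standard hardware-friendly metric update: penalize a
                   decision that disagrees with the LLR sign by |LLR|. */
                if ((llr[i] >= 0) != (b == 0))
                    c->metric += llr[i] < 0 ? -llr[i] : llr[i];
            }
        }
        qsort(cand, 2 * n_paths, sizeof *cand, cmp_metric);
        size_t keep = 2 * n_paths < L ? 2 * n_paths : L;
        memcpy(paths, cand, keep * sizeof *paths);
        free(cand);
        return keep;
    }

Sorting 2L metrics for every one of the 4096 bits is the kind of cost that grows quickly with L, which is the complexity pressure the abstract describes.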
Active data structures on GPGPUs
Active data structures support operations that may affect a large number of elements of an aggregate data structure. They are well suited for extremely fine grain parallel systems, including circuit parallelism. General purpose GPUs were designed to support regular graphics algorithms, but their intermediate level of granularity makes them potentially viable for active data structures as well. We consider the characteristics of active data structures and discuss the feasibility of implementing them on GPGPUs. We describe the GPU implementations of two such data structures (ESF arrays and index intervals), assess their performance, and discuss the potential of active data structures as an unconventional programming model that can exploit the capabilities of emerging fine grain architectures such as GPUs.
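As a generic illustration of the idea (one logical operation touching many elements of an aggregate structure at once, mapping naturally onto one processing element per element), here is a minimal C/OpenMP sketch of a range update on an "active" array; the structure and names are ours, not the paper's ESF arrays or index intervals:

    #include <stddef.h>

    /* Generic "active" operation: a single logical update (add `delta`
       to every element with index in [lo, hi)) carried out by many
       processing elements at once. On a GPU this would be one thread
       per element; OpenMP stands in here. Names are illustrative. */
    typedef struct {
        double *data;
        size_t  len;
    } ActiveArray;

    void active_range_add(ActiveArray *a, size_t lo, size_t hi, double delta)
    {
        size_t end = hi < a->len ? hi : a->len;
        #pragma omp parallel for
        for (size_t i = lo; i < end; i++)
            a->data[i] += delta;
    }

The appeal for fine grain architectures is that, with enough processing elements, the latency of such an update is essentially independent of how many elements it touches.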
Parallel Simulations for Analysing Portfolios of Catastrophic Event Risk
At the heart of the analytical pipeline of a modern quantitative insurance/reinsurance company is a stochastic simulation technique for portfolio risk analysis and pricing referred to as Aggregate Analysis. Aggregate Analysis supports the computation of risk measures, including Probable Maximum Loss (PML) and Tail Value-at-Risk (TVaR), for a variety of complex property catastrophe insurance contracts, including Cat eXcess of Loss (XL), or Per-Occurrence XL, and Aggregate XL, as well as contracts that combine these structures.
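For concreteness, the standard excess-of-loss layer transformation underlying the contract types just listed can be sketched as follows; the function name and parameters are ours:

    /* Standard excess-of-loss (XL) layer: the reinsurer pays the part
       of a loss above `attachment`, capped at `limit`. Per-occurrence
       XL applies this per event; aggregate XL applies it to the annual
       total. Names are illustrative, not the paper's. */
    static double xl_payout(double loss, double attachment, double limit)
    {
        double excess = loss - attachment;
        if (excess < 0.0)   excess = 0.0;
        if (excess > limit) excess = limit;
        return excess;
    }

For example, a 50M loss against a layer with a 20M attachment and a 25M limit pays min(25M, max(0, 50M - 20M)) = 25M.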
In this paper, we explore parallel methods for aggregate risk analysis. A parallel aggregate risk analysis algorithm and an engine based on the algorithm are proposed. The engine is implemented in C and OpenMP for multi-core CPUs and in C and CUDA for many-core GPUs. Performance analysis of the algorithm indicates that GPUs offer a cost-effective alternative HPC solution for aggregate risk analysis. The optimised algorithm on the GPU performs a one-million-trial aggregate simulation with 1,000 catastrophic events per trial, on a typical exposure set and contract structure, in just over 20 seconds, approximately 15 times faster than the sequential counterpart. This is fast enough to support the real-time pricing scenario in which an underwriter analyses different contractual terms and pricing while discussing a deal with a client over the phone.
Comment: Proceedings of the Workshop at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC), 2012, 8 pages.
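A hedged sketch of the trial loop such an engine might use, in the spirit of the paper's C/OpenMP version; the sampling callback and all names are ours. Each trial is an independent simulated year, which is what makes the workload embarrassingly parallel across cores or GPU threads:

    #include <stdlib.h>

    /* Hedged sketch of an aggregate-analysis trial loop; event
       sampling and all names are illustrative, not the paper's engine.
       Each trial simulates one year of catastrophic events, applies a
       per-occurrence XL layer, and records the aggregate loss. */
    double *run_trials(size_t n_trials, size_t events_per_trial,
                       double (*sample_event_loss)(unsigned *state),
                       double attachment, double limit)
    {
        double *trial_loss = malloc(n_trials * sizeof *trial_loss);
        #pragma omp parallel for
        for (size_t t = 0; t < n_trials; t++) {
            unsigned state = (unsigned)(t * 2654435761u + 1u); /* per-trial RNG stream */
            double agg = 0.0;
            for (size_t e = 0; e < events_per_trial; e++) {
                double x = sample_event_loss(&state) - attachment;
                if (x < 0.0)   x = 0.0;
                if (x > limit) x = limit;
                agg += x;
            }
            trial_loss[t] = agg; /* one sample of the aggregate loss distribution */
        }
        return trial_loss;
    }

Sorting the resulting trial losses then yields PML directly as a quantile (e.g., the 99.6th-percentile trial for a 1-in-250-year loss) and TVaR as the mean of the losses beyond that quantile.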
Data acquisition system for the MuLan muon lifetime experiment
We describe the data acquisition system for the MuLan muon lifetime
experiment at Paul Scherrer Institute. The system was designed to record muon
decays at rates up to 1 MHz and acquire data at rates up to 60 MB/sec. The
system employed a parallel network of dual-processor machines and repeating
acquisition cycles of deadtime-free time segments in order to reach the design
goals. The system incorporated a versatile scheme for control and diagnostics
and a custom web interface for monitoring experimental conditions.
Comment: 19 pages, 8 figures, submitted to Nuclear Instruments and Methods.
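The phrase "repeating acquisition cycles of deadtime-free time segments" suggests a segment-rotation pattern: fill the current time segment while the previous one is read out, so acquisition never stalls. A minimal C sketch of that pattern, with all names, sizes, and the OpenMP overlap ours rather than MuLan's:

    #include <stddef.h>

    /* Hedged sketch of deadtime-free segmented readout: while the
       hardware fills the current time-segment buffer, the previous
       segment is shipped to the processing farm in parallel, so the
       detector is never paused. All names and sizes are illustrative. */
    #define SEGMENT_BYTES (1u << 20)

    struct segment { unsigned char data[SEGMENT_BYTES]; size_t used; };

    extern size_t fill_from_digitizers(unsigned char *buf, size_t cap);
    extern void   ship_to_farm(const unsigned char *buf, size_t len);

    void acquisition_loop(int n_cycles)
    {
        static struct segment seg[2];
        int active = 0;
        for (int c = 0; c < n_cycles; c++, active ^= 1) {
            #pragma omp parallel sections
            {
                #pragma omp section /* acquire the current segment */
                { seg[active].used = fill_from_digitizers(seg[active].data, SEGMENT_BYTES); }
                #pragma omp section /* drain the previous segment concurrently */
                { if (seg[!active].used) ship_to_farm(seg[!active].data, seg[!active].used); }
            }
        }
    }

Overlapping fill and readout is what removes the deadtime; the parallel farm of dual-processor machines then absorbs the sustained 60 MB/s.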
Practopoiesis: Or how life fosters a mind
The mind is a biological phenomenon. Thus, biological principles of
organization should also be the principles underlying mental operations.
Practopoiesis states that the key to achieving intelligence through adaptation is an arrangement in which mechanisms lying at a lower level of organization, by their operations and interactions with the environment, enable the creation of mechanisms lying at a higher level of organization. When such an organizational
advance of a system occurs, it is called a traverse. A case of traverse is when
plasticity mechanisms (at a lower level of organization), by their operations,
create a neural network anatomy (at a higher level of organization). Another
case is the actual production of behavior by that network, whereby the
mechanisms of neuronal activity operate to create motor actions. Practopoietic
theory explains why the adaptability of a system increases with each increase
in the number of traverses. With a larger number of traverses, a system can be
relatively small and yet, produce a higher degree of adaptive/intelligent
behavior than a system with a lower number of traverses. The present analyses indicate that the two well-known traverses, neural plasticity and neural activity, are not sufficient to explain human mental capabilities. At least one additional traverse is needed, named anapoiesis for its contribution to reconstructing knowledge, e.g., from long-term memory into working memory. The conclusions bear implications for brain theory, the mind-body explanatory gap, and the development of artificial intelligence technologies.
Comment: Revised version in response to reviewer comments.