19,746 research outputs found
Ant-colony-based multiuser detection for multifunctional-antenna-array-assisted MC DS-CDMA systems
A novel Ant Colony Optimization (ACO) based Multi-User Detector (MUD) is designed for the synchronous Multi-Functional Antenna Array (MFAA) assisted Multi-Carrier Direct-Sequence Code-Division Multiple-Access (MC DS-CDMA) uplink (UL), which supports both receiver diversity and receiver beamforming. The ACO-based MUD aims to achieve a bit-error-rate (BER) performance approaching that of the optimum maximum-likelihood (ML) MUD, without carrying out an exhaustive search of the entire MC DS-CDMA search space constituted by all possible combinations of the received multi-user vectors. We will demonstrate that, regardless of the number of subcarriers or of the MFAA configuration, the system employing the proposed ACO-based MUD is capable of supporting 32 users with the aid of 31-chip Gold codes used as the T-domain spreading sequence, without any significant performance degradation compared to the single-user system. As a further benefit, the number of floating point operations per second (FLOPS) imposed by the proposed ACO-based MUD is a factor of 10^8 lower than that of the ML MUD. We will also show that, for a given increase in complexity, the MFAA allows the ACO-based MUD to achieve a higher SNR gain than the Single-Input Single-Output (SISO) MC DS-CDMA system.
Index Terms: Ant Colony Optimization, Multi-User Detector, Multi-Functional Antenna Array, Multi-Carrier Direct-Sequence Code-Division Multiple-Access, Uplink, Near-Maximum Likelihood Detection
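The core idea of an ACO-based detector, ants constructing candidate symbol vectors guided by pheromone trails instead of enumerating all 2^K combinations, can be sketched generically. This is a minimal illustration on a simplified real-valued BPSK model y = Hb (minimizing ||y - Hb||^2 over b in {-1,+1}^K), not the paper's MC DS-CDMA detector; the function name and all parameter values are assumptions.

```python
import random

def aco_mud(y, H, K, n_ants=40, n_iters=60, rho=0.1, seed=1):
    """Search b in {-1,+1}^K minimising ||y - H b||^2 with an ant colony,
    instead of enumerating all 2^K candidate vectors.  Parameter values
    are illustrative assumptions, not the paper's settings."""
    rng = random.Random(seed)

    def cost(b):
        return sum((yi - sum(h * bj for h, bj in zip(row, b))) ** 2
                   for yi, row in zip(y, H))

    tau = [[1.0, 1.0] for _ in range(K)]      # pheromone per user, per symbol
    best, best_cost = None, float('inf')
    for _ in range(n_iters):
        ants = []
        for _ in range(n_ants):
            # each ant picks every user's symbol with pheromone-weighted odds
            b = [-1 if rng.random() < tau[k][0] / (tau[k][0] + tau[k][1]) else 1
                 for k in range(K)]
            ants.append((cost(b), b))
        it_cost, it_best = min(ants, key=lambda a: a[0])
        if it_cost < best_cost:
            best, best_cost = it_best, it_cost
        for k in range(K):                    # evaporate, then reinforce
            tau[k][0] *= 1.0 - rho
            tau[k][1] *= 1.0 - rho
            tau[k][0 if it_best[k] == -1 else 1] += 1.0 / (1.0 + it_cost)
    return best, best_cost
```

On tiny problems this samples only a fraction of the search space yet typically matches the exhaustive ML answer, which is the complexity trade-off the abstract quantifies.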
Representing numeric data in 32 bits while preserving 64-bit precision
Data files often consist of numbers having only a few significant decimal
digits, whose information content would allow storage in only 32 bits. However,
we may require that arithmetic operations involving these numbers be done with
64-bit floating-point precision, which precludes simply representing the data
as 32-bit floating-point values. Decimal floating point gives a compact and
exact representation, but requires conversion with a slow division operation
before it can be used. Here, I show that interesting subsets of 64-bit
floating-point values can be compactly and exactly represented by the 32 bits
consisting of the sign, exponent, and high-order part of the mantissa, with the
lower-order 32 bits of the mantissa filled in by table lookup, indexed by bits
from the part of the mantissa retained, and possibly from the exponent. For
example, decimal data with 4 or fewer digits to the left of the decimal point
and 2 or fewer digits to the right of the decimal point can be represented in
this way using the lower-order 5 bits of the retained part of the mantissa as
the index. Data consisting of 6 decimal digits with the decimal point in any of
the 7 positions before or after one of the digits can also be represented this
way, and decoded using 19 bits from the mantissa and exponent as the index.
Encoding with such a scheme is a simple copy of half the 64-bit value, followed
if necessary by verification that the value can be represented, by checking
that it decodes correctly. Decoding requires only extraction of index bits and
a table lookup. Lookup in a small table will usually reference cache; even with
larger tables, decoding is still faster than conversion from decimal floating
point with a division operation. I discuss how such schemes perform on recent
computer systems, and how they might be used to automatically compress large
arrays in interpretive languages such as R.
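The scheme described above can be sketched in Python (my own illustration, not the paper's code; the table-building loop and function names are assumptions). For the 4-digits-left / 2-digits-right case, the index is the low-order 5 bits of the retained mantissa part, encoding copies the high half and verifies by decoding, and decoding is an index extraction plus a table lookup:

```python
import struct

INDEX_BITS = 5                     # low-order 5 bits of the retained mantissa
INDEX_MASK = (1 << INDEX_BITS) - 1

def _bits(x: float) -> int:
    """The 64 bits of a double, as an unsigned integer."""
    return struct.unpack('>Q', struct.pack('>d', x))[0]

def build_table():
    """Low 32 mantissa bits for each index, over multiples of 0.01 below
    10000.  The sign bit affects neither the index nor the low bits, so
    scanning non-negative values covers the negatives too."""
    table, conflicts = {}, []
    for hundredths in range(1000000):          # 0.00 .. 9999.99
        b = _bits(hundredths / 100)
        idx = (b >> 32) & INDEX_MASK
        low = b & 0xFFFFFFFF
        if table.setdefault(idx, low) != low:
            conflicts.append(hundredths)
    return table, conflicts

TABLE, CONFLICTS = build_table()

def decode(code: int) -> float:
    """Reassemble the double: the table supplies the discarded low half."""
    low = TABLE.get(code & INDEX_MASK, 0)
    return struct.unpack('>d', struct.pack('>Q', (code << 32) | low))[0]

def encode(x: float):
    """Copy the high 32 bits, then verify by decoding, as described."""
    code = _bits(x) >> 32
    return code if decode(code) == x else None
```

Returning None from encode signals that the value falls outside the representable subset and must be stored some other way.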
Decoding billions of integers per second through vectorization
In many important applications -- such as search engines and relational
database systems -- data is stored in the form of arrays of integers. Encoding
and, most importantly, decoding of these arrays consumes considerable CPU time.
Therefore, substantial effort has been made to reduce costs associated with
compression and decompression. In particular, researchers have exploited the
superscalar nature of modern processors and SIMD instructions. Nevertheless, we
introduce a novel vectorized scheme called SIMD-BP128 that improves over
previously proposed vectorized approaches. It is nearly twice as fast as the
previously fastest schemes on desktop processors (varint-G8IU and PFOR). At the
same time, SIMD-BP128 saves up to 2 bits per integer. For even better
compression, we propose another new vectorized scheme (SIMD-FastPFOR) that has
a compression ratio within 10% of a state-of-the-art scheme (Simple-8b) while
being two times faster during decoding.
Comment: For software, see https://github.com/lemire/FastPFor; for data, see http://boytsov.info/datasets/clueweb09gap
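The bit-packing idea behind schemes like SIMD-BP128 can be illustrated with a scalar Python sketch (my own simplification; the real codec packs blocks of 128 integers with SIMD instructions, and the function names here are assumptions). Sorted identifiers are first turned into gaps, then each block is stored with just enough bits for its largest gap:

```python
def pack_block(values):
    """Store every value with the bit width of the block's largest value."""
    width = max(max(values).bit_length(), 1)
    word = 0
    for i, v in enumerate(values):
        word |= v << (i * width)
    nbytes = (width * len(values) + 7) // 8
    return width, word.to_bytes(nbytes, 'little')

def unpack_block(width, data, count):
    """Recover the values by shifting and masking fixed-width fields."""
    word = int.from_bytes(data, 'little')
    mask = (1 << width) - 1
    return [(word >> (i * width)) & mask for i in range(count)]

def compress_sorted(ids):
    """Delta-code a sorted posting list, then bit-pack the gaps."""
    gaps = [ids[0]] + [b - a for a, b in zip(ids, ids[1:])]
    return pack_block(gaps)

def decompress(width, data, count):
    """Unpack the gaps and rebuild the identifiers by prefix summation."""
    out, total = [], 0
    for g in unpack_block(width, data, count):
        total += g
        out.append(total)
    return out
```

Since gaps between consecutive identifiers are much smaller than the identifiers themselves, the per-integer bit width (and hence the storage) shrinks accordingly; the vectorized schemes in the paper apply this same transform but decode many fields per instruction.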
Classification software technique assessment
A catalog of software options is presented to help local user communities obtain software for analyzing remotely sensed multispectral imagery. The resources required to use each program are described, along with how the program analyzes data and how it performs on an application and data set supplied by the user. An effort is made to establish a statistical performance baseline for the various programs across different data sets and analysis applications, in order to assess the state of the art.