A practical guide and software for analysing pairwise comparison experiments
The most popular strategies for capturing subjective judgments from humans
involve constructing a unidimensional relative measurement scale that
represents order preferences or judgments about a set of objects or conditions. This
information is generally captured by means of direct scoring, either in the
form of a Likert or cardinal scale, or by comparative judgments in pairs or
sets. In this sense, the use of pairwise comparisons is becoming increasingly
popular because of the simplicity of this experimental procedure. However, this
strategy requires non-trivial data analysis to aggregate the comparison ranks
into a quality scale and analyse the results, in order to take full advantage
of the collected data. This paper explains the process of translating pairwise
comparison data into a measurement scale, discusses the benefits and
limitations of such scaling methods, and introduces publicly available
Matlab software. We improve on existing scaling methods by introducing
outlier analysis, providing methods for computing confidence intervals and
statistical testing, and introducing a prior that reduces estimation error
when the number of observers is low. Most of our examples focus on image
quality assessment.

Comment: Code available at https://github.com/mantiuk/pwcm
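The scaling step described above can be illustrated with a minimal Bradley-Terry maximum-likelihood sketch. This is a generic illustration, not the paper's Matlab toolbox (which adds outlier analysis, confidence intervals and a prior); the counts matrix `C` and the MM update are illustrative assumptions.

```python
import numpy as np

# Minimal Bradley-Terry maximum-likelihood scaling sketch.
# C[i, j] = number of times condition i was preferred over condition j.
def scale_bradley_terry(C, iters=200):
    n = C.shape[0]
    w = np.ones(n)  # latent "worth" of each condition
    for _ in range(iters):
        for i in range(n):
            wins = C[i].sum()
            den = sum((C[i, j] + C[j, i]) / (w[i] + w[j])
                      for j in range(n) if j != i)
            w[i] = wins / den            # Hunter's MM update
        w /= w.sum()                     # fix the scale (scores are relative)
    return np.log(w)                     # quality scores on a log scale

# Toy example: 3 conditions, condition 0 clearly preferred overall.
C = np.array([[0, 8, 9],
              [2, 0, 6],
              [1, 4, 0]], dtype=float)
q = scale_bradley_terry(C)
print(np.argsort(-q))  # ranking from best to worst
```

The log of the normalized worths plays the role of the unidimensional quality scale the abstract refers to; only score differences are meaningful.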
Multiple Beamforming with Perfect Coding
Perfect Space-Time Block Codes (PSTBCs) achieve full diversity, full rate,
nonvanishing constant minimum determinant, uniform average transmitted energy
per antenna, and good shaping. However, their high decoding complexity is a
critical issue in practice. When Channel State Information (CSI) is
available at both the transmitter and the receiver, Singular Value
Decomposition (SVD) is commonly applied for a Multiple-Input Multiple-Output
(MIMO) system to enhance the throughput or the performance. In this paper, two
novel techniques, Perfect Coded Multiple Beamforming (PCMB) and Bit-Interleaved
Coded Multiple Beamforming with Perfect Coding (BICMB-PC), are proposed,
employing both PSTBCs and SVD with and without channel coding, respectively.
With CSI at the transmitter (CSIT), the decoding complexity of PCMB is
substantially reduced compared to a MIMO system employing PSTBC, providing a
new prospect for CSIT. In particular, because of the special property of the
generation matrices, PCMB provides much lower decoding complexity than the
state-of-the-art SVD-based uncoded technique in dimensions 2 and 4. Similarly,
the decoding complexity of BICMB-PC is much lower than the state-of-the-art
SVD-based coded technique in these two dimensions, and the complexity gain is
greater than in the uncoded case. Moreover, these complexity reductions are
achieved with only negligible or modest loss in performance.

Comment: accepted to journal
Probing context-dependent errors in quantum processors
Gates in error-prone quantum information processors are often modeled using
sets of one- and two-qubit process matrices, the standard model of quantum
errors. However, the results of quantum circuits on real processors often
depend on additional external "context" variables. Such contexts may include
the state of a spectator qubit, the time of data collection, or the temperature
of control electronics. In this article we demonstrate a suite of simple,
widely applicable, and statistically rigorous methods for detecting context
dependence in quantum circuit experiments. They can be used on any data that
comprise two or more "pools" of measurement results obtained by repeating the
same set of quantum circuits in different contexts. These tools may be
integrated seamlessly into standard quantum device characterization techniques,
like randomized benchmarking or tomography. We experimentally demonstrate these
methods by detecting and quantifying crosstalk and drift on the publicly
accessible 16-qubit ibmqx3.

Comment: 11 pages, 3 figures, code and data available in source file
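A minimal version of such a two-pool comparison can be sketched with a contingency-table chi-squared test on outcome counts. The counts below are invented, and the paper's suite of methods is more elaborate than this single test; it only illustrates the idea of testing whether the same circuit behaves differently in two contexts.

```python
from scipy.stats import chi2_contingency

# Sketch: detect context dependence by testing whether the outcome
# frequencies of one repeated circuit differ between two pools.
# Rows = contexts, columns = counts of measurement outcomes 0 and 1.
pool_a = [480, 520]  # counts in context A (e.g. spectator qubit in |0>)
pool_b = [430, 570]  # counts in context B (e.g. spectator qubit in |1>)
chi2, p, dof, _ = chi2_contingency([pool_a, pool_b])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# A small p-value flags statistically significant context dependence;
# a large one means the pools are consistent with a context-free model.
```

In practice the test would be run per circuit across the full pool of results, with multiple-comparison corrections when many circuits are screened.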
Designing labeled graph classifiers by exploiting the Rényi entropy of the dissimilarity representation
Representing patterns as labeled graphs is becoming increasingly common in
the broad field of computational intelligence. Accordingly, a wide repertoire
of pattern recognition tools, such as classifiers and knowledge discovery
procedures, are nowadays available and tested for various datasets of labeled
graphs. However, the design of effective learning procedures operating in the
space of labeled graphs is still a challenging problem, especially from the
computational complexity viewpoint. In this paper, we present a major
improvement of a general-purpose classifier for graphs, which is conceived on
an interplay between dissimilarity representation, clustering,
information-theoretic techniques, and evolutionary optimization algorithms. The
improvement focuses on a specific key subroutine devised to compress the input
data. We prove different theorems which are fundamental to the setting of the
parameters controlling such a compression operation. We demonstrate the
effectiveness of the resulting classifier by benchmarking the developed
variants on well-known datasets of labeled graphs, considering as distinct
performance indicators the classification accuracy, computing time, and
parsimony in terms of structural complexity of the synthesized classification
models. The results show state-of-the-art test set accuracy and a
considerable speed-up in computing time.

Comment: Revised version