Bipolarity in ear biometrics
Identifying people using their biometric data is a problem that is receiving increasing attention. This paper investigates a method that allows the matching of people in the context of victim identification by using their ear biometric data. A high-quality picture (taken professionally) is matched against a set of low-quality pictures (family albums). In this paper, soft computing methods are used to model different kinds of uncertainty that arise when manually annotating the pictures. More specifically, we study the use of bipolar satisfaction degrees to explicitly handle the bipolar information about the available ear biometrics.
Constraint-wish and satisfied-dissatisfied: an overview of two approaches for dealing with bipolar querying
In recent years, there has been an increasing interest in dealing with user preferences in flexible database querying, expressing both positive and negative information in a heterogeneous way. This is what is usually referred to as bipolar database querying. Different frameworks have been introduced to deal with such bipolarity. In this chapter, an overview of two approaches is given. The first approach is based on mandatory and desired requirements. Hereby the complement of a mandatory requirement can be considered as a specification of what is not desired at all. So, mandatory requirements indirectly contribute to negative information (expressing what the user does not want to retrieve), whereas desired requirements can be seen as positive information (expressing what the user prefers to retrieve). The second approach is directly based on positive requirements (expressing what the user wants to retrieve) and negative requirements (expressing what the user does not want to retrieve). Both approaches use pairs of satisfaction degrees as the underlying framework but have different semantics, and thus also different operators for criteria evaluation, ranking, aggregation, etc.
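As a rough illustration of the second approach described above, the sketch below scores each record with a (satisfaction, dissatisfaction) pair: one degree from the positive requirements, one from the negative requirements. All names, the sample data, and the min/max combination semantics are illustrative assumptions, not the chapter's exact operators.

```python
def triangular(x, a, b, c):
    """A simple triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def evaluate(record, positive, negative):
    """Return a (satisfaction, dissatisfaction) pair in [0,1]^2 for one record.

    positive / negative are lists of predicates returning degrees in [0, 1].
    Here we assume conjunctive semantics: min over positive requirements,
    max over negative requirements.
    """
    s = min((p(record) for p in positive), default=1.0)
    d = max((n(record) for n in negative), default=0.0)
    return (s, d)

# Hypothetical example: houses queried on price; the user prefers a price
# around 200k (positive) and does not want a long commute (negative).
houses = [
    {"price": 190_000, "distance_km": 3.0},
    {"price": 260_000, "distance_km": 1.0},
]
pos = [lambda h: triangular(h["price"], 150_000, 200_000, 250_000)]
neg = [lambda h: triangular(h["distance_km"], 5.0, 15.0, 25.0)]

scores = [evaluate(h, pos, neg) for h in houses]
# Rank: higher satisfaction first, lower dissatisfaction as tie-breaker.
ranked = sorted(zip(scores, houses), key=lambda t: (-t[0][0], t[0][1]))
```

Keeping the two degrees as a pair, rather than collapsing them into one number, is what preserves the heterogeneous semantics the chapter contrasts.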
Combining quantifications for flexible query result ranking
Databases contain data, and the database systems governing them are intended to allow users to query those data. On the one hand, the data may be subject to imperfections; on the other hand, users may employ imperfect query preference specifications. Both kinds of imperfection lead to each query answer being accompanied by a collection of quantifications indicating how well (part of) a group of data complies with (part of) the user's query. A fundamental question is how to present the user with the query answers that best comply with his or her query preferences. The work presented in this paper first identifies the difficulties to overcome in reaching such a presentation. Mainly, a useful presentation requires ranking the query answers based on the aforementioned quantifications, but it seems advisable not to combine quantifications with different interpretations. The paper then introduces and examines a novel technique to determine a query answer ranking. Finally, a few aspects of this technique, among which its computational efficiency, are discussed.
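One simple way to rank without fusing quantifications of different interpretations is a lexicographic comparison over a stated priority of interpretations. The sketch below is a hypothetical illustration of that idea (the interpretation names, priority order, and data are assumptions, not the paper's technique).

```python
# Each answer carries several quantifications keyed by interpretation
# (e.g. preference satisfaction vs. confidence in the underlying data).
# Instead of averaging them into one score, compare interpretation by
# interpretation in a fixed priority order.
from functools import cmp_to_key

PRIORITY = ["preference", "data_confidence"]  # assumed ordering

def compare(a, b, eps=1e-9):
    """Lexicographic comparison of two answers' quantification dicts."""
    for key in PRIORITY:
        diff = a[key] - b[key]
        if abs(diff) > eps:
            return -1 if diff > 0 else 1   # higher degree ranks earlier
    return 0

answers = [
    {"id": "t1", "preference": 0.9, "data_confidence": 0.6},
    {"id": "t2", "preference": 0.9, "data_confidence": 0.8},
    {"id": "t3", "preference": 0.7, "data_confidence": 1.0},
]
ranked = sorted(answers, key=cmp_to_key(compare))
# t2 precedes t1 (tie on preference broken by confidence); t3 comes last.
```

This keeps each quantification in its own role: an answer with low preference satisfaction can never outrank one with high satisfaction merely because its data are cleaner.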
Capacities and Games on Lattices: A Survey of Results
We provide a survey of recent developments about capacities (or fuzzy measures) and cooperative games in characteristic form, when they are defined on more general structures than the usual power set of the universal set, namely lattices. In a first part, we give various possible interpretations and applications of these general concepts, and then we elaborate on the possible definitions of the usual tools in these theories, such as the Choquet integral, the Möbius transform, and the Shapley value.
Keywords: capacity, fuzzy measure, game, lattice, Choquet integral, Shapley value
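As a concrete anchor for one of the tools the survey generalizes, the sketch below computes the discrete Choquet integral in the classical power-set case. The capacity and the criteria scores are illustrative values, not taken from the survey.

```python
def choquet(scores, capacity):
    """Discrete Choquet integral of `scores` (dict criterion -> value in [0,1])
    with respect to `capacity` (dict frozenset -> weight), assumed monotone
    with capacity(empty set) = 0 and capacity(full set) = 1.
    """
    items = sorted(scores, key=scores.get)   # criteria in ascending score order
    total, prev = 0.0, 0.0
    for i, x in enumerate(items):
        coalition = frozenset(items[i:])     # criteria scoring at least scores[x]
        total += (scores[x] - prev) * capacity[coalition]
        prev = scores[x]
    return total

# Illustrative capacity over two criteria: each alone counts 0.4, but
# together they count 1.0, modelling complementarity between them.
capacity = {
    frozenset(): 0.0,
    frozenset({"math"}): 0.4,
    frozenset({"physics"}): 0.4,
    frozenset({"math", "physics"}): 1.0,
}
value = choquet({"math": 0.6, "physics": 0.9}, capacity)
```

With these numbers the integral is 0.6 * 1.0 + (0.9 - 0.6) * 0.4 = 0.72, strictly above the weighted mean a plain additive measure would give; lattice-valued capacities extend exactly this construction beyond the power set.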
Fast Decoder for Overloaded Uniquely Decodable Synchronous Optical CDMA
In this paper, we propose a fast decoder algorithm for uniquely decodable (errorless) code sets for overloaded synchronous optical code-division multiple-access (O-CDMA) systems. The proposed decoder is designed in such a way that the users can uniquely recover the information bits with a very simple decoder, which uses only a few comparisons. Compared to the maximum-likelihood (ML) decoder, which has a high computational complexity even for moderate code lengths, the proposed decoder has much lower computational complexity. Simulation results in terms of bit error rate (BER) demonstrate that, for a given BER, the proposed decoder requires only a 1-2 dB higher signal-to-noise ratio (SNR) than the ML decoder.
Comment: arXiv admin note: substantial text overlap with arXiv:1806.0395
The Error-Pattern-Correcting Turbo Equalizer
The error-pattern correcting code (EPCC) is incorporated in the design of a turbo equalizer (TE) with the aim of correcting dominant error events of the inter-symbol interference (ISI) channel at the output of its matching Viterbi detector. By targeting the low-Hamming-weight interleaved errors of the outer convolutional code, which are responsible for low-Euclidean-weight errors in the Viterbi trellis, the turbo equalizer with an error-pattern correcting code (TE-EPCC) exhibits a much lower bit-error rate (BER) floor than the conventional non-precoded TE, especially for high-rate applications. A maximum-likelihood upper bound is developed on the BER floor of the TE-EPCC for a generalized two-tap ISI channel, in order to study the TE-EPCC's signal-to-noise ratio (SNR) gain for various channel conditions and design parameters. In addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is compared to demonstrate the present TE's superiority for short interleaver lengths and high coding rates.
Comment: This work has been submitted to the special issue of the IEEE Transactions on Information Theory titled "Facets of Coding Theory: from Algorithms to Networks". This work was supported in part by NSF Theoretical Foundation Grant 0728676.
Aggregation on bipolar scales
The paper addresses the problem of extending aggregation operators typically defined on [0, 1] to the symmetric interval [-1, 1], where the "0" value plays a particular role (neutral value). We distinguish the cases where aggregation operators are associative or not. In the former case, the "0" value may play the role of neutral or absorbing element, leading to pseudo-addition and pseudo-multiplication. In this category we also address the special case of minimum and maximum defined on some finite ordinal scale. In the latter case, we find that a general class of extended operators can be defined using an interpolation approach, supposing the value of the aggregation to be known for ternary vectors.
Keywords: bipolar scale; bi-capacity; aggregation
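One associative operator that arises on such a symmetric scale is the symmetric maximum, where 0 is neutral and opposite values cancel. The sketch below follows its usual definition as a rough illustration of the bipolar setting, not the paper's exact construction.

```python
def symmetric_max(a, b):
    """Symmetric maximum on [-1, 1]: returns the argument of larger
    absolute value; exactly opposite values cancel to the neutral 0.
    Restricted to [0, 1] x [0, 1] it coincides with the ordinary max.
    """
    if a == -b:
        return 0.0
    return a if abs(a) >= abs(b) else b

# Positive and negative arguments compete by magnitude:
symmetric_max(0.7, -0.3)   # the positive argument dominates
symmetric_max(0.5, -0.5)   # opposite values cancel to 0
symmetric_max(-0.8, 0.3)   # the negative argument dominates
```

A known subtlety, consistent with the associative/non-associative distinction made above, is that the symmetric maximum loses associativity on some triples mixing positive and negative values, which is part of what motivates the interpolation-based alternative for non-associative operators.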