Product Launches with Biased Reviewers: The Importance of Not Being Earnest
The standard simple sequential herding model is altered to allow a firm with a new product to have it reviewed publicly before launch. Reviewers are either inherently pessimistic, optimistic, or unbiased. We find the counter-intuitive result that a firm with a good product will prefer a pessimistic reviewer. Although firms with a bad product prefer unbiased reviewers, signalling considerations force them to copy the choice of the good-product firm in order to avoid revealing their product type. This asymmetric impact provides a strong explanation for the stylized fact that reviewers are often viewed as being very critical.
Comparing Feature Detectors: A bias in the repeatability criteria, and how to correct it
Most computer vision applications rely on algorithms that find local correspondences between different images. These algorithms detect and compare stable local invariant descriptors centered at scale-invariant keypoints. Because of the importance of the problem, new keypoint detectors and descriptors are constantly being proposed, each one claiming to perform better than (or to be complementary to) the preceding ones. This raises the question of how to compare very diverse methods fairly. Such evaluation has mainly been based on a repeatability criterion for the keypoints under a series of image perturbations (blur, illumination, noise, rotations, homotheties, homographies, etc.). In this paper, we argue that the classic repeatability criterion is biased towards algorithms that produce redundant, overlapping detections. To compensate for this bias, we propose a variant of the repeatability rate that takes descriptor overlap into account. We apply this variant to revisit the popular benchmark by Mikolajczyk et al. on classic and new feature detectors. Experimental evidence shows that the hierarchy of these feature detectors is severely disrupted by the amended comparator.
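The overlap bias described in this abstract can be illustrated with a toy sketch (all names, coordinates, and the distance tolerance are hypothetical; the paper's actual criterion compares region overlap rather than center distance):

```python
# Toy illustration of the repeatability bias: under many-to-one matching,
# a detector that emits near-duplicate detections around one location is
# rewarded; one-to-one matching removes that reward. Points and tolerance
# are made up for illustration.

def repeats(detections, reference, tol=2.0, one_to_one=False):
    """Count detections that are re-found among the reference keypoints."""
    used = set()
    count = 0
    for (x, y) in detections:
        for j, (xr, yr) in enumerate(reference):
            if one_to_one and j in used:
                continue  # each reference keypoint may be claimed only once
            if (x - xr) ** 2 + (y - yr) ** 2 <= tol ** 2:
                used.add(j)
                count += 1
                break
    return count

reference = [(10, 10), (50, 50), (90, 90)]
redundant = [(10.0, 10.0), (10.5, 10.0), (9.5, 10.0)]  # three overlapping hits

repeats(redundant, reference)                    # 3 of 3 count as "repeated"
repeats(redundant, reference, one_to_one=True)   # only 1 of 3
```

Under many-to-one matching, all three redundant detections count as repeated and inflate the score; a criterion that accounts for overlap (here crudely, via one-to-one matching) penalizes the redundancy.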
Contributions of Herodotus to African History
This paper focuses on the contributions of Herodotus to African historiography. Its aim is to define, justify, and affirm the importance of Herodotus in African history. Being a library-based study, its data is mainly obtained from secondary sources and from discussions with historians. A purely historical research method was adopted so as to gain a deeper understanding of the pertinent issues involved in African historiography, and the historical data was evaluated using external and internal criticism. Herodotus was one man who did not subscribe to the biased writing about Africa. If, by scientific knowledge, scholars can eliminate all forms of frustration which victimize people, particularly Africans, the sincere rapprochement of mankind to create a true humanity will be fostered, as argued by Cheikh Anta Diop in the reconstruction of African history. The Euro-centric view that Africa lacks history is biased, and Herodotus, one of the classical writers, tried to argue a case for Africa's rich historical background.
Scrutinizing and De-Biasing Intuitive Physics with Neural Stethoscopes
Visually predicting the stability of block towers is a popular task in the domain of intuitive physics. While previous work focusses on prediction accuracy, a one-dimensional performance measure, we provide a broader analysis of the physical understanding learned by the final model and of how the learning process can be guided. To this end, we introduce neural stethoscopes as a general-purpose framework for quantifying the degree of importance of specific factors of influence in deep neural networks, as well as for actively promoting and suppressing information as appropriate. In doing so, we unify concepts from multitask learning and from training with auxiliary and adversarial losses. We apply neural stethoscopes to analyse the state-of-the-art neural network for stability prediction. We show that the baseline model is susceptible to being misled by incorrect visual cues, which leads to a performance breakdown to the level of random guessing when training on scenarios where visual cues are inversely correlated with stability. Using stethoscopes to promote meaningful feature extraction increases performance from 51% to 90% prediction accuracy. Conversely, when trained on an easy dataset where visual cues are positively correlated with stability, the baseline model learns a bias that leads to poor performance on a harder dataset. Using an adversarial stethoscope, the network is successfully de-biased, leading to a performance increase from 66% to 88%.
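The promote/suppress mechanism described here can be summarized as a single weighted objective (a minimal sketch: the actual framework attaches a stethoscope head to intermediate activations, and the function name, signature, and numbers below are invented for illustration):

```python
def stethoscope_objective(loss_main, loss_stethoscope, lam):
    """Combined training objective for the main task and a stethoscope probe.
    lam > 0: auxiliary mode, promotes the probed information;
    lam < 0: adversarial mode, suppresses (de-biases) it;
    lam = 0: analytic mode, the stethoscope only observes."""
    return loss_main + lam * loss_stethoscope

stethoscope_objective(8, 5, lam=1)    # promote  -> 13
stethoscope_objective(8, 5, lam=-1)   # suppress -> 3
stethoscope_objective(8, 5, lam=0)    # observe  -> 8
```

The single weight `lam` is what unifies multitask, auxiliary, and adversarial training in one formulation.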
Forensic identification: the Island Problem and its generalisations
In forensics it is a classical problem to determine, when a suspect shares a property with a criminal, the probability that the suspect is in fact the criminal. In this paper we give a detailed account of this problem in various degrees of generality. We start with the classical case where the probability of having the property, as well as the a priori probability of being the criminal, is the same for all individuals. We then generalize the solution to deal with heterogeneous populations, biased search procedures for the suspect, correlations between the properties of different individuals, uncertainty about the subpopulation of the criminal and the suspect, and uncertainty about the frequency of the property. We also consider the effect of the way the search for the suspect is conducted, in particular when this is done by a database search. A recurring theme is that we show that conditioning is of importance when one wants to quantify the "weight" of the evidence by a likelihood ratio. Apart from these mathematical issues, we also discuss the practical problems in applying them to the legal process. The posterior probabilities are typically the same for all reasonable choices of the hypotheses, but this is not the whole story. The legal process might force one to dismiss certain hypotheses, for instance when the relevant likelihood ratio depends on prior probabilities. We discuss this and related issues as well. As such, the paper is relevant both from a theoretical and from an applied point of view.
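For the classical case this abstract starts from (uniform priors, a trait carried independently with known frequency), the well-known island-problem posterior can be computed directly; the function name is ours:

```python
# Classical island problem: the criminal is one of N + 1 islanders, and a
# suspect is found carrying a trait of population frequency p. With uniform
# priors, the likelihood ratio of the match evidence is 1/p, and the
# posterior probability that the suspect is the criminal is 1 / (1 + N * p).

def island_posterior(n_others, p):
    """Posterior that the matching suspect is guilty, N = n_others."""
    return 1.0 / (1.0 + n_others * p)

island_posterior(100, 0.01)  # -> 0.5: a "1-in-100" trait is far from proof
```

Even a seemingly rare trait leaves substantial doubt once the population size enters the calculation, which is exactly the kind of conditioning issue the paper examines in greater generality.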
Clustering consistency in neuroimaging data analysis
Clustering techniques have been applied to neuroscience data analysis for decades, and new algorithms keep being developed to address different problems. However, when it comes to applications of clustering, it is often hard to select an appropriate algorithm and to evaluate the quality of the clustering results, because the ground truth is unknown. Conclusions may also be biased when they rest on a single algorithm, since each algorithm makes its own assumptions about the structure of the data, which may not match the real data. In this paper, we explore the benefits of integrating the clustering results of multiple clustering algorithms through a tunable consensus clustering strategy, and we demonstrate the importance and necessity of consistency in neuroimaging data analysis.
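One common way to integrate results from multiple algorithms (a minimal sketch of co-association consensus, not necessarily the paper's tunable strategy) is to average, over base clusterings, how often each pair of items is grouped together:

```python
import numpy as np

def coassociation(labelings):
    """Fraction of base clusterings that place each pair of items together.

    labelings: list of label vectors, one per clustering run; labels are
    arbitrary per run, so relabelled but identical partitions agree fully.
    """
    labelings = np.asarray(labelings)      # shape (n_runs, n_items)
    n_runs, n_items = labelings.shape
    co = np.zeros((n_items, n_items))
    for run in labelings:
        co += (run[:, None] == run[None, :])  # pairwise same-cluster mask
    return co / n_runs

runs = [
    [0, 0, 1, 1],   # algorithm A
    [1, 1, 0, 0],   # algorithm B: same partition, labels swapped
    [0, 0, 0, 1],   # algorithm C disagrees on item 2
]
co = coassociation(runs)
# co[0, 1] == 1.0  (items 0 and 1 grouped together in every run)
# co[2, 3] == 2/3  (together in 2 of 3 runs)
```

Thresholding or re-clustering this matrix yields a consensus partition that no longer depends on any single algorithm's structural assumptions.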