Automatic recognition of fingerspelled words in British Sign Language
We investigate the problem of recognizing words from
video, fingerspelled using the British Sign Language (BSL)
fingerspelling alphabet. This is a challenging task since the
BSL alphabet involves both hands occluding each other, and
contains signs which are ambiguous from the observer’s
viewpoint. The main contributions of our work include:
(i) recognition based on hand shape alone, not requiring
motion cues; (ii) robust visual features for hand shape
recognition; (iii) scalability to large lexicon recognition
with no re-training.
We report results on a dataset of 1,000 low-quality webcam
videos of 100 words. The proposed method achieves a
word recognition accuracy of 98.9%.
On the Expressiveness of LARA: A Unified Language for Linear and Relational Algebra
We study the expressive power of the Lara language - a recently proposed unified model for expressing relational and linear algebra operations - in terms of both traditional database query languages and some analytic tasks often performed in machine learning pipelines. We start by showing Lara to be expressively complete with respect to first-order logic with aggregation. Since Lara is parameterized by a set of user-defined functions that transform values in tables, the exact expressive power of the language depends on how these functions are defined. We distinguish two main cases depending on the level of genericity queries are enforced to satisfy. Under strong genericity assumptions, the language cannot express matrix convolution, a very important operation in current machine learning pipelines. The language is also local, and thus cannot express operations such as matrix inverse that exhibit recursive behavior. To express convolution, one can relax the genericity requirement by adding an underlying linear order on the domain. This, however, destroys locality and makes the expressive power of the language much more difficult to understand. In particular, although under complexity-theoretic assumptions the resulting language still cannot express matrix inverse, a proof of this fact without such assumptions seems challenging to obtain.
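Why convolution requires an order on the domain can be seen in a minimal sketch (hypothetical pure-Python code, not the Lara formalism): each output cell is a product-sum over *adjacent* positions of the input, so the operation must refer to neighboring indices, which a fully generic, order-invariant query cannot do.

```python
# Minimal 2D convolution (valid padding), pure Python.
# Illustrative sketch only; names are hypothetical.

def conv2d(image, kernel):
    """Slide `kernel` over `image`; out[i][j] is the elementwise
    product-sum of the kernel and the image patch anchored at (i, j)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, -1]]  # difference along the main diagonal of each patch
print(conv2d(image, kernel))  # → [[-4, -4], [-4, -4]]
```

Every access `image[i + di][j + dj]` presupposes that "the cell one step right/down" is well defined, which is exactly the linear order on the domain the abstract mentions.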
Manipulation Risks in Explainable AI: The Implications of the Disagreement Problem
Artificial Intelligence (AI) systems are increasingly used in high-stakes
domains of our lives, increasing the need to explain their decisions and to
ensure that they are aligned with how we want decisions to be made. The field
of Explainable AI (XAI) has emerged in response. However, it faces a
significant challenge known as the disagreement problem, where multiple
explanations are possible for the same AI decision or prediction. While the
existence of the disagreement problem is acknowledged, the potential
implications associated with this problem have not yet been widely studied.
First, we provide an overview of the different strategies explanation providers
could deploy to adapt the returned explanation to their benefit. We make a
distinction between strategies that attack the machine learning model or
underlying data to influence the explanations, and strategies that leverage the
explanation phase directly. Next, we analyse several objectives and concrete
scenarios the providers could have to engage in this behavior, and the
potential dangerous consequences this manipulative behavior could have on
society. We emphasize that it is crucial to investigate this issue now, before
these methods are widely implemented, and propose some mitigation strategies.
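The disagreement problem described above can be illustrated with a toy sketch (hypothetical numbers and feature names, not the paper's setup): two standard explanation styles applied to the same linear model rank features differently, leaving a provider free to report whichever ordering suits them.

```python
# Toy illustration of the disagreement problem: two common explanation
# styles yield different feature rankings for the same model and input.
# All weights and features are made up for the sketch.

weights = {"income": 0.5, "age": -2.0, "tenure": 1.0}
x = {"income": 6.0, "age": 1.0, "tenure": 2.5}

# Explanation A: global importance = |weight|
global_rank = sorted(weights, key=lambda f: -abs(weights[f]))

# Explanation B: local attribution = weight * feature value for this input
local_attr = {f: weights[f] * x[f] for f in weights}
local_rank = sorted(local_attr, key=lambda f: -abs(local_attr[f]))

print(global_rank)  # → ['age', 'tenure', 'income']
print(local_rank)   # → ['income', 'tenure', 'age']
```

Both orderings are defensible explanations of the same prediction, which is precisely the degree of freedom a manipulative explanation provider could exploit.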
Enhanced Content-Based Fake News Detection Methods with Context-Labeled News Sources
This work examined the relative effectiveness of multilayer perceptron, random forest, and multinomial naïve Bayes classifiers, trained using bag-of-words and term frequency-inverse document frequency (TF-IDF) transformations of documents in the Fake News Corpus and the Fake and Real News Dataset. The goal of this work was to help meet the formidable challenges posed by the proliferation of fake news to society, including the erosion of public trust, disruption of social harmony, and endangerment of lives. The training included the use of context-categorized fake news in an effort to enhance the tools' effectiveness. It was found that TF-IDF provided more accurate results than bag-of-words across all evaluation metrics for identifying fake news instances, and that the Fake News Corpus yielded much higher result metrics than the Fake and Real News Dataset. In comparison to state-of-the-art methods, the models performed as expected.
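The two document representations compared in the study can be sketched minimally, assuming the standard TF-IDF weighting (the documents and code below are illustrative, not the paper's pipeline or datasets):

```python
# Minimal bag-of-words vs. TF-IDF sketch in pure Python.
# Illustrative documents only; not drawn from either corpus.
import math

docs = [
    "breaking shocking news you will not believe",
    "breaking news city council passes budget",
    "council passes budget after long debate",
]
tokenized = [d.split() for d in docs]
vocab = sorted({w for doc in tokenized for w in doc})

def bow(doc):
    # bag of words: raw term counts
    return {w: doc.count(w) for w in vocab if w in doc}

def tfidf(doc):
    # term frequency scaled by inverse document frequency:
    # words shared by many documents (e.g. "breaking") score lower
    n = len(tokenized)
    out = {}
    for w in set(doc):
        tf = doc.count(w) / len(doc)
        df = sum(1 for d in tokenized if w in d)
        out[w] = tf * math.log(n / df)
    return out

scores = tfidf(tokenized[0])
print(scores["shocking"] > scores["breaking"])  # → True
```

Under raw counts, "shocking" and "breaking" look identical in the first document; TF-IDF boosts "shocking" because it is specific to that document, which is the kind of discriminative signal the classifiers above can exploit.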
Aligning Robot and Human Representations
To act in the world, robots rely on a representation of salient task aspects:
for example, to carry a coffee mug, a robot may consider movement efficiency or
mug orientation in its behavior. However, if we want robots to act for and with
people, their representations must not be just functional but also reflective
of what humans care about, i.e. they must be aligned. We observe that current
learning approaches suffer from representation misalignment, where the robot's
learned representation does not capture the human's representation. We suggest
that because humans are the ultimate evaluator of robot performance, we must
explicitly focus our efforts on aligning learned representations with humans,
in addition to learning the downstream task. We advocate that current
representation learning approaches in robotics should be studied from the
perspective of how well they accomplish the objective of representation
alignment. We mathematically define the problem, identify its key desiderata,
and situate current methods within this formalism. We conclude by suggesting
future directions for exploring open challenges.
Comment: 14 pages, 3 figures, 1 table