625 research outputs found
Lessons learned from the design of a mobile multimedia system in the Moby Dick project
Recent advances in wireless networking technology and the exponential development of semiconductor technology have engendered a new paradigm of computing, called personal mobile computing or ubiquitous computing. This offers a vision of the future with a much richer and more exciting set of architecture research challenges than extrapolations of the current desktop architectures. In particular, these devices will have limited battery resources, will handle diverse data types, and will operate in environments that are insecure, dynamic, and that vary significantly in time and location. The research performed in the MOBY DICK project is about designing such a mobile multimedia system. This paper discusses the approach taken in the MOBY DICK project to solve some of these problems, discusses its contributions, and assesses what was learned from the project.
Efficient Deformable Shape Correspondence via Kernel Matching
We present a method to match three-dimensional shapes under non-isometric
deformations, topology changes and partiality. We formulate the problem as
matching between a set of pair-wise and point-wise descriptors, imposing a
continuity prior on the mapping, and propose a projected descent optimization
procedure inspired by difference of convex functions (DC) programming.
Surprisingly, in spite of the highly non-convex nature of the resulting
quadratic assignment problem, our method converges to a semantically meaningful
and continuous mapping in most of our experiments, and scales well. We provide
preliminary theoretical analysis and several interpretations of the method.
Comment: Accepted for oral presentation at 3DV 2017, including supplementary material.
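The abstract only names the difference-of-convex (DC) idea behind its projected descent procedure; as a generic illustrative sketch (not the paper's actual algorithm, and with a toy objective chosen here for clarity), DC programming minimizes g(x) − h(x), with g and h both convex, by repeatedly linearizing the concave part −h at the current iterate:

```python
def dca_step(x, grad_h, argmin_linearized):
    """One DCA iteration: x_{k+1} = argmin_x g(x) - <grad h(x_k), x>."""
    return argmin_linearized(grad_h(x))

# Toy example: f(x) = x^4 - 2x^2, split as g(x) = x^4 and h(x) = 2x^2 (both convex).
# The linearized subproblem argmin_x x^4 - s*x has the closed form x = (s/4)^(1/3).
x = 2.0
for _ in range(50):
    x = dca_step(x, lambda t: 4 * t, lambda s: (s / 4) ** (1 / 3))
print(round(x, 6))  # converges to 1.0, a local minimum of f
```

Each step solves a convex subproblem, so the objective decreases monotonically even though f itself is non-convex; this is the structure the paper's optimization is "inspired by".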
Beyond Hypergraph Dualization
This problem concerns hypergraph dualization and its generalization to poset dualization. A hypergraph H = (V, E) consists of a finite collection E of sets over a finite set V, i.e. E ⊆ P(V) (the powerset of V). The elements of E are called hyperedges, or simply edges. A hypergraph is said to be simple if none of its edges is contained within another. A transversal (or hitting set) of H is a set T ⊆ V that intersects every edge of E. A transversal is minimal if it does not contain any other transversal as a subset. The set of all minimal transversals of H is denoted by Tr(H). The hypergraph (V, Tr(H)) is called the transversal hypergraph of H. Given a simple hypergraph H, the hypergraph dualization problem (Trans-Enum for short) concerns the enumeration without repetitions of Tr(H). The Trans-Enum problem can also be formulated as a dualization problem in posets. Let (P, ≤) be a poset (i.e. ≤ is a reflexive, antisymmetric, and transitive relation on the set P). For A ⊆ P, ↓A (resp. ↑A) is the downward (resp. upward) closure of A under the relation ≤ (i.e. ↓A is an ideal and ↑A a filter of (P, ≤)). Two antichains (B⁺, B⁻) of P are said to be dual if ↓B⁺ ∪ ↑B⁻ = P and ↓B⁺ ∩ ↑B⁻ = ∅. Given an implicit description of a poset P and an antichain B⁺ (resp. B⁻) of P, the poset dualization problem (Dual-Enum for short) enumerates the set B⁻ (resp. B⁺), denoted by Dual(B⁺) = B⁻ (resp. Dual(B⁻) = B⁺). Notice that the function Dual is an involution, i.e. Dual(Dual(B)) = B.
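As a small illustration of the transversal definitions above (a brute-force sketch for tiny inputs, not one of the output-efficient enumeration algorithms the dualization literature studies):

```python
from itertools import combinations

def minimal_transversals(vertices, edges):
    """Enumerate Tr(H): all minimal hitting sets of the hypergraph (vertices, edges)."""
    found = []
    # Enumerate candidates by increasing size, so every transversal found
    # before a candidate is no larger than it; minimality then reduces to
    # checking that no previously found transversal is a subset.
    for r in range(len(vertices) + 1):
        for cand in combinations(vertices, r):
            s = set(cand)
            if all(s & e for e in edges):          # s hits every edge
                if not any(t <= s for t in found):  # s contains no smaller transversal
                    found.append(s)
    return found

# H with V = {1, 2, 3, 4} and edges {1,2}, {2,3}, {3,4}
print(minimal_transversals([1, 2, 3, 4], [{1, 2}, {2, 3}, {3, 4}]))
# → [{1, 3}, {2, 3}, {2, 4}]
```

This exhaustive search is exponential in |V|; the interest of Trans-Enum is precisely whether Tr(H) can be enumerated in time polynomial in the combined input and output size.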
The Inadequacy of Shapley Values for Explainability
This paper develops a rigorous argument for why the use of Shapley values in
explainable AI (XAI) will necessarily yield provably misleading information
about the relative importance of features for predictions. Concretely, this
paper demonstrates that there exist classifiers, and associated predictions,
for which the Shapley values will incorrectly assign more importance to
features that are provably irrelevant for the prediction, and less importance
to features that are provably relevant for it. The paper also argues that, given recent
complexity results, the existence of efficient algorithms for the computation
of rigorous feature attribution values in the case of some restricted classes
of classifiers should be deemed unlikely at best.
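For concreteness, the Shapley-value feature attribution the paper critiques can be computed exactly on a toy Boolean classifier (the model, instance, and characteristic function below are illustrative choices, not taken from the paper):

```python
from itertools import permutations, product
from fractions import Fraction

def shapley(n, v):
    """Exact Shapley values: average each feature's marginal contribution
    v(S ∪ {i}) - v(S) over all n! feature orderings."""
    phi = [Fraction(0)] * n
    perms = list(permutations(range(n)))
    for order in perms:
        seen = set()
        for i in order:
            before = v(frozenset(seen))
            seen.add(i)
            phi[i] += v(frozenset(seen)) - before
    return [p / len(perms) for p in phi]

# Toy classifier f(x) = x0 OR (x1 AND x2), explained at instance x = (1, 1, 1).
# Characteristic function v(S): fix the features in S to their instance values,
# average f over uniform Boolean assignments of the remaining features.
def v(S):
    total = Fraction(0)
    free = [i for i in range(3) if i not in S]
    for bits in product([0, 1], repeat=len(free)):
        x = [1, 1, 1]
        for i, b in zip(free, bits):
            x[i] = b
        total += x[0] | (x[1] & x[2])
    return total / 2 ** len(free)

print(shapley(3, v))  # → [Fraction(5, 24), Fraction(1, 12), Fraction(1, 12)]
```

The attributions sum to v({0,1,2}) − v(∅) = 3/8, as the efficiency axiom requires. The paper's point is that even such exactly computed values can rank a provably irrelevant feature above a provably relevant one for some classifiers.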
- …