23,673 research outputs found

    A foundation for machine learning in design

    This paper presents a formalism for considering the issues of learning in design. A foundation for machine learning in design (MLinD) is defined so as to answer basic questions about learning in design, such as "What types of knowledge can be learnt?", "How does learning occur?", and "When does learning occur?". Five main elements of MLinD are presented: the input knowledge, knowledge transformers, output knowledge, goals/reasons for learning, and learning triggers. Using this foundation, published MLinD systems were reviewed; this systematic review provides a basis for validating the proposed foundation. The paper concludes that considerable work remains to fully formalize the foundation of MLinD.
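    As a rough illustration of the five elements, the Python sketch below bundles them into a single record type. The class name, field names, and the toy transformer are assumptions made purely for illustration and are not part of the paper's formalism.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch: the five MLinD elements collected into one record.
# Names and types are illustrative assumptions, not the paper's notation.
@dataclass
class LearningEvent:
    input_knowledge: List[str]                 # what the learner starts from
    knowledge_transformers: List[Callable]     # operations that produce new knowledge
    output_knowledge: List[str] = field(default_factory=list)  # what is learnt
    goal: str = ""                             # reason for learning
    trigger: str = ""                          # when learning occurs

    def run(self) -> List[str]:
        """Apply each transformer to each piece of input knowledge."""
        self.output_knowledge = [t(k) for t in self.knowledge_transformers
                                 for k in self.input_knowledge]
        return self.output_knowledge

# Toy usage: "learning" here is just an uppercase transformation.
event = LearningEvent(
    input_knowledge=["beam sizing rule"],
    knowledge_transformers=[str.upper],
    goal="compile design heuristics",
    trigger="new design case observed",
)
print(event.run())
```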

    Monotone Pieces Analysis for Qualitative Modeling

    Building qualitative models of industrial applications is a crucial task for model-based diagnosis. A model abstraction procedure is designed to automatically transform a quantitative model into a qualitative model. If the data are monotone, the behavior can easily be abstracted using the corners of the bounding rectangle, so many existing model abstraction approaches rely on monotonicity. However, robustly detecting monotone pieces in scattered data obtained from numerical simulation or experiments is not trivial. This paper introduces an approach based on scale-dependent monotonicity: the notion that monotonicity can be defined relative to a scale. Real-valued functions defined on a finite set of reals, e.g. simulation results, can be partitioned into quasi-monotone segments. The end points of the monotone segments are used as the initial set of landmarks for qualitative model abstraction, which proceeds as an iterative refinement process starting from these initial landmarks. The monotonicity analysis presented here can be used in constructing many other kinds of qualitative models; it is robust and computationally efficient.
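    A minimal Python sketch of the scale-dependent idea follows: reversals smaller than a user-chosen scale are treated as noise, and the indices where genuine reversals occur become the initial landmarks. The function name and the exact bookkeeping are assumptions; the paper's actual procedure may differ.

```python
def quasi_monotone_segments(ys, scale):
    """Split a sampled function into quasi-monotone pieces.

    A reversal smaller than `scale` is treated as noise and does not end
    the current segment.  Returns the indices of segment end points, which
    can serve as the initial landmarks for qualitative model abstraction.
    (Illustrative sketch only, not the paper's exact algorithm.)
    """
    landmarks = [0]
    direction = 0            # +1 increasing, -1 decreasing, 0 undecided
    extreme, extreme_i = ys[0], 0
    for i, y in enumerate(ys[1:], start=1):
        if direction == 0:
            if abs(y - extreme) >= scale:
                direction = 1 if y > extreme else -1
                extreme, extreme_i = y, i
        elif (y - extreme) * direction >= 0:
            extreme, extreme_i = y, i          # still moving the same way
        elif abs(y - extreme) >= scale:
            landmarks.append(extreme_i)        # genuine turning point
            direction = -direction
            extreme, extreme_i = y, i
    landmarks.append(len(ys) - 1)
    return landmarks

# The small dip at index 3 is below the scale, so only one turning point is kept.
samples = [0.0, 1.0, 2.0, 1.9, 3.0, 1.0, 0.0]
print(quasi_monotone_segments(samples, scale=0.5))   # -> [0, 4, 6]
```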

    Algebraic foundations for qualitative calculi and networks

    A qualitative representation $\phi$ is like an ordinary representation of a relation algebra, but instead of requiring $(a ; b)^\phi = a^\phi | b^\phi$, as we do for ordinary representations, we only require that $c^\phi \supseteq a^\phi | b^\phi \iff c \geq a ; b$, for each $c$ in the algebra. A constraint network is qualitatively satisfiable if its nodes can be mapped to elements of a qualitative representation, preserving the constraints. If a constraint network is satisfiable then it is clearly qualitatively satisfiable, but the converse can fail. However, for a wide range of relation algebras, including the point algebra, the Allen Interval Algebra, RCC8 and many others, a network is satisfiable if and only if it is qualitatively satisfiable. Unlike ordinary composition, the weak composition arising from qualitative representations need not be associative, so we can generalise by considering network satisfaction problems over non-associative algebras. We prove that, computationally, qualitative representations have many advantages over ordinary representations: whereas many finite relation algebras have only infinite representations, every finite qualitatively representable algebra has a finite qualitative representation; the representability problem for (the atom structures of) finite non-associative algebras is NP-complete; the network satisfaction problem over a finite qualitatively representable algebra is always in NP; the validity of equations over qualitative representations is co-NP-complete. On the other hand, we prove that there is no finite axiomatisation of the class of qualitatively representable algebras. Comment: 22 pages
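    To make network satisfaction concrete, the sketch below runs algebraic closure (path consistency) over a small point-algebra network; for the point algebra this is known to decide satisfiability. The function names and the set-based encoding are illustrative assumptions and are not drawn from the paper.

```python
from itertools import product

# Base relations of the point algebra and its weak composition table.
BASE = {'<', '=', '>'}
COMP = {
    ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
    ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
    ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'},
}
INV = {'<': '>', '=': '=', '>': '<'}

def compose(r, s):
    """Weak composition of two point-algebra relations (sets of base relations)."""
    return {c for a, b in product(r, s) for c in COMP[(a, b)]}

def qualitatively_satisfiable(n, constraints):
    """Algebraic closure (path consistency) over an n-node point-algebra network.

    `constraints` maps ordered pairs (i, j) to sets of base relations.
    Returns False if some edge becomes empty, True otherwise; for the point
    algebra this coincides with satisfiability.  Illustrative sketch only.
    """
    R = {(i, j): set(BASE) for i in range(n) for j in range(n)}
    for i in range(n):
        R[(i, i)] = {'='}
    for (i, j), rel in constraints.items():
        R[(i, j)] = set(rel)
        R[(j, i)] = {INV[b] for b in rel}
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            refined = R[(i, j)] & compose(R[(i, k)], R[(k, j)])
            if refined != R[(i, j)]:
                if not refined:
                    return False
                R[(i, j)] = refined
                changed = True
    return True

# x < y, y < z, z < x is a contradictory cycle, so the network is rejected.
print(qualitatively_satisfiable(3, {(0, 1): {'<'}, (1, 2): {'<'}, (2, 0): {'<'}}))
```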

    Infinite-Duration Bidding Games

    Two-player games on graphs are widely studied in formal methods as they model the interaction between a system and its environment. The game is played by moving a token throughout a graph to produce an infinite path. There are several common modes to determine how the players move the token through the graph; e.g., in turn-based games the players alternate turns in moving the token. We study the {\em bidding} mode of moving the token, which, to the best of our knowledge, has never been studied in infinite-duration games. The following bidding rule was previously defined and called Richman bidding. Both players have separate {\em budgets}, which sum up to $1$. In each turn, a bidding takes place: both players submit bids simultaneously, where a bid is legal if it does not exceed the available budget, and the higher bidder pays his bid to the other player and moves the token. The central question studied in bidding games is a necessary and sufficient initial budget for winning the game: a {\em threshold} budget in a vertex is a value $t \in [0,1]$ such that if Player 1's budget exceeds $t$, he can win the game, and if Player 2's budget exceeds $1-t$, he can win the game. Threshold budgets were previously shown to exist in every vertex of a reachability game; such games have an interesting connection with {\em random-turn} games -- a sub-class of simple stochastic games in which the player who moves is chosen randomly. We show the existence of threshold budgets for a qualitative class of infinite-duration games, namely parity games, and a quantitative class, namely mean-payoff games. The key component of the proof is a quantitative solution to strongly-connected mean-payoff bidding games, in which we extend the connection with random-turn games to these games and construct explicit optimal strategies for both players. Comment: A short version appeared in CONCUR 2017. The paper is accepted to JAC
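    For the previously known reachability case mentioned above, threshold budgets satisfy a simple local averaging law (the Richman-game characterisation), which the hedged Python sketch below approximates by fixed-point iteration. The graph encoding and function name are assumptions; the parity and mean-payoff constructions of the paper itself are not reproduced here.

```python
def threshold_budgets(succ, targets, iters=1000):
    """Approximate threshold budgets in a reachability bidding game.

    `succ` maps each vertex to its list of successors and `targets` is the
    set of vertices Player 1 wants to reach.  Following the Richman-game
    characterisation, Thresh(v) = (min_u Thresh(u) + max_u Thresh(u)) / 2
    over the successors u of v, with Thresh = 0 on targets.  Fixed-point
    iteration from above approximates the threshold values, which also equal
    Player 1's losing probability in the corresponding random-turn game.
    (Illustrative sketch of the known reachability case only.)
    """
    t = {v: 0.0 if v in targets else 1.0 for v in succ}
    for _ in range(iters):
        for v in succ:
            if v not in targets and succ[v]:
                vals = [t[u] for u in succ[v]]
                t[v] = (min(vals) + max(vals)) / 2
    return t

# A path a -> b -> goal, where each vertex also has an edge to a losing trap.
graph = {
    'goal': ['goal'],
    'trap': ['trap'],
    'b': ['goal', 'trap'],
    'a': ['b', 'trap'],
}
print(threshold_budgets(graph, targets={'goal'}))
# Roughly: b = 0.5, a = 0.75, i.e. Player 1 needs more than 3/4 of the budget at a.
```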

    Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation

    We address the unsupervised learning of several interconnected problems in low-level vision: single view depth prediction, camera motion estimation, optical flow, and segmentation of a video into the static scene and moving regions. Our key insight is that these four fundamental vision problems are coupled through geometric constraints. Consequently, learning to solve them together simplifies the problem because the solutions can reinforce each other. We go beyond previous work by exploiting geometry more explicitly and segmenting the scene into static and moving regions. To that end, we introduce Competitive Collaboration, a framework that facilitates the coordinated training of multiple specialized neural networks to solve complex problems. Competitive Collaboration works much like expectation-maximization, but with neural networks that act as both competitors to explain pixels that correspond to static or moving regions, and as collaborators through a moderator that assigns pixels to be either static or independently moving. Our novel method integrates all these problems in a common framework and simultaneously reasons about the segmentation of the scene into moving objects and the static background, the camera motion, depth of the static scene structure, and the optical flow of moving objects. Our model is trained without any supervision and achieves state-of-the-art performance among joint unsupervised methods on all sub-problems. Comment: CVPR 201
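    The schematic PyTorch loop below is only a toy rendering of the alternating two-phase training: tiny stand-in networks and random frames replace the real depth, camera-motion, flow, and moderator networks, so it shows the competition/collaboration structure rather than the paper's actual models or losses.

```python
import torch
import torch.nn as nn

# Toy sketch of alternating competition/collaboration training.
# The networks, losses, and data below are placeholder assumptions.
def tiny_net(out_channels):
    return nn.Sequential(nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, out_channels, 3, padding=1))

static_net = tiny_net(1)    # stands in for depth + camera-motion reasoning
flow_net = tiny_net(2)      # stands in for optical flow of moving objects
moderator = tiny_net(1)     # assigns each pixel to static vs. moving

opt_players = torch.optim.Adam(list(static_net.parameters()) +
                               list(flow_net.parameters()), lr=1e-4)
opt_moderator = torch.optim.Adam(moderator.parameters(), lr=1e-4)

def residuals(pair):
    """Toy per-pixel reconstruction errors for the two competitors."""
    r_static = (static_net(pair) ** 2).mean(dim=1, keepdim=True)
    r_moving = (flow_net(pair) ** 2).mean(dim=1, keepdim=True)
    return r_static, r_moving

for step in range(100):
    pair = torch.rand(4, 6, 32, 32)           # two stacked RGB frames

    # Competition phase: freeze the moderator's pixel assignment and let the
    # two specialists minimise the error on the pixels assigned to them.
    with torch.no_grad():
        mask = torch.sigmoid(moderator(pair))  # 1 = static pixel, 0 = moving
    r_static, r_moving = residuals(pair)
    loss_players = (mask * r_static + (1 - mask) * r_moving).mean()
    opt_players.zero_grad()
    loss_players.backward()
    opt_players.step()

    # Collaboration phase: freeze the specialists and train the moderator to
    # assign each pixel to whichever competitor explains it better.
    with torch.no_grad():
        r_static, r_moving = residuals(pair)
    mask = torch.sigmoid(moderator(pair))
    loss_moderator = (mask * r_static + (1 - mask) * r_moving).mean()
    opt_moderator.zero_grad()
    loss_moderator.backward()
    opt_moderator.step()
```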